This section aims to address the common as well as not-so-common queries and issues you might encounter while using the RudderStack platform.
This section contains some commonly asked questions about the RudderStack installation and setup procedure.
To get started quickly, you can use our hosted control plane, where you can configure your sources and destinations with ease. Alternatively, you can use the open-source RudderStack Config Generator. The hosted control plane has additional features such as the Live Event Debugger and Transformations, so we highly recommend using it.
The token is required when you use the hosted control plane. It is a unique identifier for your configuration settings, which the RudderStack data plane pulls from the control plane.
Yes, you can. Many people who don't want to sign up on our SaaS control plane use the RudderStack Config Generator to configure their sources and destinations. Please follow this setup guide to install and set up RudderStack in no time.
RudderStack lets you fill in these values with variable names prepended with `env.`. You can then populate the secrets as environment variables when running the data plane. Note that this is an Enterprise-only feature.

For example, suppose you are configuring Amazon S3 as a destination but don't want to enter the AWS access key credentials in the destination settings. Fill in the value with a placeholder that starts with `env.`, such as `env.MY_AWS_ACCESS_KEY`. Then, set the environment variable `MY_AWS_ACCESS_KEY` while running the data plane.
This section contains some commonly asked questions about the RudderStack Server.
RudderStack's hosted solution runs on AWS. We run on AWS EKS, and our EKS cluster spans three availability zones (us-east-1a, us-east-1b, us-east-1c).
The number of events a single RudderStack node can handle depends on the destinations you are sending the event data to, as well as on the transformations you are running. However, here are some ballpark figures:
Dumping to S3 - Approximately 1.5K events/sec
Dumping to Warehouse - Approximately 1K events/sec
Dumping to Warehouse + a couple of cloud destinations - Approximately 750 events/sec
Please note that these are conservative numbers. A single RudderStack node can handle a 5x+ event load at peak; those events are cached locally and then drained at the regular throughput.
Yes, you are right. We don't explicitly list any server-side sources, as all sources work in pretty much the same way. When you create a source, you get a `writeKey` which you can use to send events to the RudderStack data plane. The `writeKey` identifies the source the events come from.
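For example, here is a minimal sketch of sending a server-side `track` event to the data plane over HTTP. It assumes a Node.js 18+ runtime (for the global `fetch`); the data plane URL, write key, and event payload are placeholders you would replace with your own values:

```javascript
// Minimal sketch: send a track event to the RudderStack data plane's
// HTTP endpoint. DATA_PLANE_URL and WRITE_KEY are placeholders.
const DATA_PLANE_URL = "https://your-data-plane.example.com";
const WRITE_KEY = "your-write-key";

async function sendTrackEvent() {
  const res = await fetch(`${DATA_PLANE_URL}/v1/track`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The write key is sent as the Basic auth username with an empty password
      Authorization: "Basic " + Buffer.from(`${WRITE_KEY}:`).toString("base64"),
    },
    body: JSON.stringify({
      userId: "user-123",
      event: "Order Completed",
      properties: { revenue: 30 },
    }),
  });
  console.log("Data plane responded with status:", res.status);
}

sendTrackEvent();
```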
There is a config variable to set the number of workers that send data to destinations. The default value is 64, which is itself an aggressive number. You can increase the number of workers, but note that some destinations throttle the number of requests per account.
You can log in to PostgreSQL and check the tables and the number of rows in each. That should give you a rough idea of the number of events being sent.
This section contains some commonly asked questions about the RudderStack Config Generator.
This issue can occur when you have some old data left in your browser's local storage. Please try using the latest version of the RudderStack Config Generator after clearing the browser cache and local storage. In case it still does not work, please feel free to contact us.
For self-hosting the UI, you can use the RudderStack Config Generator.
Please note that the open-source Config Generator only generates the source-destination configurations required by RudderStack. RudderStack's hosted control plane has more features, such as event transformations and live event debuggers for sources and destinations, and is free to use.
This section aims to address the commonly asked questions about RudderStack's Transformation feature.
The batching is done per end user. All the events from a given end user are batched and then sent to the transformation function. The batching process is controlled via the following three parameters in `config.toml`:

`processSessions = false` (set it to `true` to enable batching)
`sessionThresholdEvents = 100`
`sessionInactivityThresholdInS = 120`

Events from an end user are batched until we have 100 events or 120 seconds of inactivity since the last event. This list is then passed to the transformation function.
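To make this concrete, here is a minimal sketch of a transformation operating on the batched event list. The filtered event name and the `processedAt` enrichment are hypothetical, and the exact function signature may differ across RudderStack versions:

```javascript
// A minimal sketch of a transformation that receives the batched event
// list and returns the (possibly modified) list.
function transform(events) {
  return events
    .filter((event) => event.event !== "Debug Event") // drop unwanted events
    .map((event) => {
      event.properties = event.properties || {};
      event.properties.processedAt = new Date().toISOString(); // illustrative enrichment
      return event;
    });
}
```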
There is parallelism in calling the transformation function, so ideally it should not slow down the system. However, if you have a really slow transformation, you can increase the number of transformation workers by tweaking the corresponding variable in `config.toml`.
The number of user events that are batched together can be configured with `sessionThresholdEvents` and `sessionInactivityThresholdInS`. The higher these numbers, the longer the events are grouped into a session. It is important to note that these settings increase the memory footprint proportionally.
Each execution of a transformation happens in a sandboxed V8 isolate. We do not support sharing data or connections across executions.
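To illustrate what this means in practice, the hypothetical sketch below shows module-level state that is re-initialized on every execution rather than persisting across runs:

```javascript
// Illustrative only: since each execution runs in a fresh sandboxed V8
// isolate, module-level state such as this counter does NOT persist
// between executions.
let seenEvents = 0; // re-initialized on every execution

function transform(events) {
  seenEvents += events.length; // counts only the current batch
  return events;
}
```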
I want to use `e.request_ip` in a transformation. But when I look at the live event stream for the `page` calls we have coming in, I do not see that key in the payload. Is it just not shown, or is it not there?
`e.request_ip` is populated by the RudderStack data plane. It will always be there, but we don't show that parameter in our Live Events tab. The Live Events tab shows the information you are passing to the data plane, whereas `request_ip` is something the data plane itself populates. So, we don't show it in the live events.
RudderStack allows you to implement your own custom transformation functions that leverage the event data to implement specific use cases based on your business requirements. For more information, refer to our documentation on adding custom transformations.
This section aims to address the commonly asked questions about RudderStack's SDKs for your web and mobile apps.
You should use the `track` method to capture your eCommerce events. For the `track` method parameters specific to eCommerce, you can refer to our Google Analytics Enhanced Ecommerce guide.
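As an illustration, a `track` call for an eCommerce event with the JavaScript SDK might look like the following sketch, where the event name follows the eCommerce spec and the property values are placeholders:

```javascript
// Illustrative eCommerce track call using the JavaScript SDK's global
// rudderanalytics object; the property values are placeholders.
rudderanalytics.track("Product Added", {
  product_id: "P123", // hypothetical product identifier
  name: "Running Shoes",
  price: 79.99,
  quantity: 1,
  currency: "USD",
});
```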
RudderStack currently supports integrations with over 40 marketing and analytics platforms. This section aims to address the commonly asked questions about these integrations.
Yes, you can use the same destination. It should work without any problem.
You can override the sync frequency set in the UI by setting `warehouseSyncFreqIgnore` to `true` in `config.toml` (this can also be done through the corresponding environment variable, as described in the `config.toml` comments). You can then set your desired frequency by changing the relevant sync frequency variable.
At the infrastructure layer, we run on a multi-availability zone EKS cluster. So hardware failures, if any, are handled by Kubernetes by relocating pods, and so on.
At the application level, RudderStack has a couple of failure modes:
Normal mode is where everything works as expected.
If for some reason it fails (e.g., because of a bug), we bring the system into a Degraded mode, where it processes incoming requests but doesn't send them to destinations.
If that fails too (e.g., internal database corruption), we bring the system into a Maintenance mode, where we save the previous state (which can be debugged and processed) and start from scratch, while still receiving requests.
All our SDKs also have failure handling. They can store events in local storage and retry on failure.
RudderStack provides isolation between the data and control planes. For example, if the control plane (where you manage the source and destination configurations) goes offline, the data plane continues to operate.
All this is done to ensure that RudderStack can always receive events, and no events are lost.
Adding a new node requires a bit of downtime. However, we have built RudderStack to minimize this downtime as much as possible. When a new node is added, we need to re-balance users across nodes (to preserve event ordering). While the re-balancing is happening (it can take a few minutes), RudderStack does not send events to downstream destinations, but it continues to receive events so that your SDKs won't see any failures (ignoring the small ELB switchover time). Also, the SDKs have local caching and retries built in, so even if they see a failure, no events are lost.
The final downstream destination APIs (Amplitude, Braze, etc.) can be unavailable or return a failure code for any number of reasons. RudderStack retries these kinds of jobs depending on the type of failure:
Retry for a time window of 3 hours with exponential backoff and a minimum of 3 times
Retry for a minimum of 3 times without any backoff
The above behavior is configurable via config variables in `config.toml` (refer here):

```toml
[Router]
retryTimeWindowInMins = 180
minRetryBackoffInS = 10
maxRetryBackoffInS = 300
maxFailedCountForJob = 8
```
The final downstream destination APIs (Amplitude, Customer.io, etc.) have limits on the number of events they accept, either at an account level or at a user/device level. We try to throttle the API requests as per the final destination's limits.
These limits can also be configured using config variables in `config.toml`, or using environment variables as described in the comments here:

```toml
# The below configuration throttles requests to Amplitude at 1000 req/s for the account
# and 10 req/s for an individual user/device
[Router.throttler.AM]
limit = 1000
timeWindowInS = 1
userLevelThrottling = true
userLevelLimit = 10
userLevelTimeWindowInS = 1
```