Configure SecureAuth for Production Readiness
Tailor SecureAuth on Kubernetes via GitOps to ensure it's optimized for production environments.
Overview
All production readiness aspects are marked in the acp-on-k8s repository with comments in the following format: `# Production Readiness - <name>`. These comments guide you to the places in the repository where changes may need to be applied. They act as a guide only and do not mark every single place where custom modifications could be necessary.
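For illustration, such a marker might sit directly above the setting it refers to, as in the hypothetical values snippet below (the field names and values are examples, not copied from the repository):

```yaml
# Production Readiness - Pod Resources
# Re-evaluate these limits before moving to production.
resources:
  limits:
    cpu: "1"
    memory: 2Gi
```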
Use Your Own Source
Using your own repository when employing the GitOps approach is paramount to maintaining transparency, security, and control. By doing so, you ensure that you're pulling from a trusted, self-managed source, and can implement changes swiftly without relying on third-party repositories. Importantly, there are intentional limitations in the stack design, serving to guide and educate users on how to make modifications effectively and safely.
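As a sketch, pointing Flux at your own fork could look like the following GitRepository definition; the fork URL, resource name, and interval are assumptions rather than values from the acp-on-k8s repository:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: acp-on-k8s
  namespace: flux-system
spec:
  interval: 1m
  # Pull from a repository you own and control instead of the upstream example.
  url: https://github.com/your-org/acp-on-k8s
  ref:
    branch: main
```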
Adjust Pod Resources
By default, the stack has resource limits designed to be compatible with individual PCs. For production, it's necessary to re-evaluate and adjust these limits based on the anticipated workload. Specifically, all databases in the stack are capped at 2GB of RAM, a setting not suited for high-load production scenarios. Adapting these values is vital for maintaining optimal database performance.
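For example, raising the limits for one of the databases through a Helm values override might look like the sketch below; the key paths are illustrative, so check the chart's values files for the actual structure used in the repository:

```yaml
# Hypothetical override for a database workload; size requests and limits
# to match the load you expect in production.
postgresql:
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
    limits:
      cpu: "4"
      memory: 8Gi
```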
Adjust Pod Count
The default configuration for compute resources, particularly SecureAuth and FaaS pods, is set with scaling limits to cater to typical usage scenarios. While the broader stack is designed for high availability (HA), these specific pod count limits can become a bottleneck during heavy workloads in production settings. As you transition to a production environment, it's essential to evaluate the demands of your application and adjust the maximum scaling limits of these compute resources to ensure seamless operation even under peak loads.
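One way to raise that ceiling is through a HorizontalPodAutoscaler; the sketch below assumes a Deployment named acp, which is a placeholder rather than the name used by the charts:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: acp                 # placeholder; match the Deployment created by the chart
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: acp
  minReplicas: 3            # keep HA even at low load
  maxReplicas: 10           # raise the ceiling to absorb production peaks
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```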
Use Proper StorageClass
A well-optimized storage solution is pivotal for performance and reliability. Ensure you use a suitable `StorageClass`, especially for the datastores. Parameters such as throughput and IOPS can significantly impact performance, so select them based on your application's needs. The default storage settings of your local hardware or cloud provider might suffice for smaller workloads but may not be adequate for larger, production-scale demands. In such scenarios, opt for a dedicated `StorageClass` with enhanced performance specifications to handle the increased load efficiently.
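As one example, on AWS a dedicated StorageClass for the datastores could be defined with explicit IOPS and throughput via the EBS CSI driver; the name and numbers below are assumptions to adjust for your workload and provider:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-datastore        # hypothetical name
provisioner: ebs.csi.aws.com  # substitute your cloud provider's CSI driver
parameters:
  type: gp3
  iops: "6000"
  throughput: "500"           # MiB/s
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```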
Use Your Own Secret
The repository contains encrypted files that are purely exemplary. Because the private encryption key is available in the repository, anyone can decrypt them. For heightened security in production, always rotate all secrets and encrypt them with your own key. For added robustness, Flux offers integration with `sops`, which can use your cloud provider's key management service for seamless encryption key handling.
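A minimal `.sops.yaml` rule showing that pattern might look as follows; the path pattern and the KMS key ARN are placeholders for your own setup:

```yaml
# .sops.yaml - encrypt Secret payloads with a cloud-managed KMS key
creation_rules:
  - path_regex: .*\.enc\.yaml$
    encrypted_regex: ^(data|stringData)$
    kms: arn:aws:kms:us-east-1:111122223333:key/REPLACE-WITH-YOUR-KEY-ID
```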
Adopt Organization-Specific Domain
The default domains provided use the `.local` TLD (Top-Level Domain), which is not routable on the public internet and is reserved for local network use. Before deploying to production, replace these `.local` domain references with a valid, publicly routable domain name that your organization owns or controls. This ensures that your services are accessible on the internet and operate securely under your designated domain.
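In practice this usually means updating the ingress hosts and matching TLS secrets; the snippet below is a generic sketch with hypothetical key names and the reserved example.com domain standing in for your own:

```yaml
ingress:
  hosts:
    - auth.example.com            # replace the default *.local host
  tls:
    - secretName: auth-example-com-tls
      hosts:
        - auth.example.com
```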
Helm Charts Configuration
Utilizing Helm for deploying SecureAuth is integral to the GitOps approach. Dive deep into the Helm configuration documentation to mold the deployment according to your unique requirements. Always take time to sift through the other charts and parameters, refining them for your specific solution.
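Within the GitOps flow, chart parameters are typically overridden in HelmRelease objects; the sketch below is illustrative only (the chart name, namespace, and values are assumptions, and the API version should match your Flux installation):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: acp
  namespace: acp-system           # hypothetical namespace
spec:
  interval: 10m
  chart:
    spec:
      chart: acp                  # hypothetical chart name
      sourceRef:
        kind: HelmRepository
        name: acp
  values:
    # Consult the chart's documentation for the full set of supported values.
    replicaCount: 3
```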
TimescaleDB Image
Due to initial incompatibility with Apple's M1 processors, we used the TimescaleDB nightly build. For production, transition to an official release build.
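Switching the image is typically a values override similar to the sketch below; the key structure, repository, and tag shown are placeholders, so pick a stable release from the official TimescaleDB image listings:

```yaml
timescaledb:
  image:
    repository: timescale/timescaledb-ha   # official image; confirm the variant the chart expects
    tag: pg15-ts2.11                        # placeholder tag; use a current stable release
```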
Remove Unnecessary Components
For an efficient production setup, eliminate any redundant components from your deployment. For instance, if you're employing external databases, removing the internal ones from the deployment conserves resources, leading to optimized performance and cost benefits.
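For instance, if the databases are managed externally, the bundled instances can typically be switched off through chart values; the toggles below are hypothetical and should be checked against the actual charts:

```yaml
postgresql:
  enabled: false                  # skip deploying the in-cluster database
externalDatabase:
  host: prod-db.example.com       # your managed database endpoint
  port: 5432
```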
Additional Cloud Components
Deploying on cloud infrastructure involves a deeper consideration of various components and configurations (see the sketch after this list for one example):
Networking Driver: Ensure seamless and efficient inter-service communication.
Storage Driver: Opt for storage solutions tailored for your cloud provider, enhancing both performance and resilience.
Load Balancer Driver: Distribute incoming traffic uniformly across multiple targets.
Node Autoscaling: Scale node numbers dynamically based on workload demands.
DNS Provider: Align your DNS solution with your ingress setup to facilitate seamless service discovery.
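As one example of the DNS piece, external-dns can be configured to publish records for your ingress resources; the values below follow common external-dns chart options and use placeholder names, so the provider and zone should match your environment:

```yaml
# Example external-dns values for AWS Route 53
provider: aws
domainFilters:
  - example.com        # limit record management to your organization's zone
policy: upsert-only    # never delete records external-dns did not create
sources:
  - ingress            # watch Ingress resources for hostnames
```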