This repository contains a reliability-focused implementation of the Prima Senior SRE Tech Challenge. The solution was tested on WSL (Ubuntu 24.04 LTS).
The system exposes a simple React UI and a user management API backed by DynamoDB and S3, designed with scalability, security, and operational reliability in mind.
You need a Linux distribution to run the solution; on Windows, you can use WSL with Ubuntu (my primary work environment). Please check that you have the following installed:
- Python
- Docker
- kubectl
- minikube
- Helm CLI
- Flux CLI
- LocalStack
- nvm with Node 20
I chose VS Code as my editor.
Client
↓
Kubernetes Service (Ingress)
↓
Python API (FastAPI)
↓
AWS Services
├─ DynamoDB (user metadata)
└─ S3 (avatar storage)
Infrastructure is provisioned via Terraform and deployed into Kubernetes using Helm.
I did not add a LoadBalancer because the cluster runs on Minikube; all Kubernetes services are exposed via ClusterIP and accessed with kubectl port-forward.
- FastAPI was chosen for its async support, performance, and built-in OpenAPI documentation.
- DynamoDB provides fully managed, highly available storage with no operational overhead.
- S3 ensures durable and scalable object storage for user avatars.
- Docker enables immutable builds and reproducible deployments.
- Kubernetes + Helm provide scalability, self-healing, and configuration management.
- FluxCD provides a GitOps workflow with automatic reconciliation.
- OpenTofu integrates seamlessly with the existing GitOps workflow.
- LocalStack provides a free local environment that emulates DynamoDB and S3.
Returns a list of all users.
Example response:
[
{
"name": "Test User",
"email": "test-user@prima.it",
"avatar_url": "https://..."
}
]
Creates a new user and uploads an avatar image.
- Multipart form data
- Fields: name, email, avatar
Deletes an existing user.
- Fields: email
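As an illustration, the three operations above could be exercised with curl once the API is reachable on port 8000. Note that the /users path, the base URL, and the avatar filename are assumptions based on the descriptions above, not confirmed routes from this repository.

```shell
# Assumed base URL (via kubectl port-forward) and endpoint path /users
API=http://localhost:8000

# List all users
curl -s "$API/users"

# Create a user with an avatar (multipart form data: name, email, avatar)
curl -s -X POST "$API/users" \
  -F "name=Test User" \
  -F "email=test-user@prima.it" \
  -F "avatar=@avatar.png"

# Delete a user by email
curl -s -X DELETE "$API/users" \
  -F "email=test-user@prima.it"
```

These commands assume the port-forward described later in this README is already running.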
- Docker
- Docker Compose
- Python 3.11+
- Terraform
- LocalStack
First, you need to run LocalStack with Docker.
docker pull localstack/localstack
docker run -d \
--name localstack \
-p 4566:4566 \
-p 4571:4571 \
localstack/localstack
docker logs -f localstack
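Since Docker Compose is listed as a prerequisite, the same container can also be described declaratively. Below is a sketch of an equivalent docker-compose.yaml; the SERVICES value is an assumption that limits emulation to the services this project uses:

```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
    environment:
      # Assumed: only S3 and DynamoDB are needed by this solution
      - SERVICES=s3,dynamodb
```

Run it with `docker compose up -d` and follow logs with `docker compose logs -f localstack`.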
Simply run the following commands to run the solution.
- make setup # Running setup
- make up # Starting services
- make down # Stopping services
- make clean # Cleaning up
I created several commands to simplify development.
- make dev-tools # One-time setup to install dev tools
- make format # Format python code
- make lint # Lint python code
- make test # Run test for python code
- make tf-init-localstack # Running terraform init (localstack)
- make tf-validate-localstack # Running terraform validate (localstack)
- make tf-plan-localstack # Running terraform plan (localstack)
- Minikube
- Kubectl
- Helm CLI
- Flux CLI
First, you need to run Minikube.
minikube start --driver=docker --profile=flux-cluster --cpus 4 --memory 8196
Second, bootstrap Flux for GitHub (GitOps workflow).
export GITHUB_TOKEN=[GITHUB_PAT_TOKEN]
flux bootstrap github \
--token-auth \
--owner=[GITHUB_ACCOUNT] \
--repository=gitops-terraform-workflow \
--branch=main \
--path=clusters/local \
--personal
Third, install TF-Controller
kubectl apply -f https://raw.githubusercontent.com/flux-iac/tofu-controller/main/docs/release.yaml
kubectl apply -f cluster/dev/github-repository-secret.yaml
kubectl apply -f cluster/dev/ingress.yaml
kubectl apply -f cluster/dev/dev-iac-automation.yaml
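For reference, a dev-iac-automation.yaml for the tofu-controller would typically contain a Terraform custom resource. The sketch below follows the shape documented by the controller; the resource name, path, and GitRepository name are illustrative assumptions, not the actual contents of this repository:

```yaml
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: prima-infra          # assumed name
  namespace: flux-system
spec:
  interval: 1m
  approvePlan: auto          # auto-apply plans (GitOps reconciliation)
  path: ./terraform          # assumed path to the Terraform code
  sourceRef:
    kind: GitRepository
    name: gitops-terraform-workflow
    namespace: flux-system
```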
Finally, you need to create a custom Docker image to run the TF-controller.
docker build -f ./Docker/Dockerfile.tfcontroller -t <savagame>/gitops-terraform:tf_az_cli_1_1 .
docker push <savagame>/gitops-terraform:tf_az_cli_1_1
After that, run the following port-forwards to experiment with the solution.
- kubectl port-forward svc/prima-api-prima-api 8000:8000 # expose APIs
- kubectl port-forward svc/prima-api-prima-api-frontend 8080:80 # expose Front-end
- kubectl port-forward svc/localstack 4566:4566 -n localstack # expose localstack
Terraform provisions:
- DynamoDB table for users
- S3 bucket for avatars
- IAM roles and policies
- IAM Roles for Service Accounts (IRSA) for Kubernetes
Using IRSA avoids static AWS credentials and follows AWS security best practices by granting least-privilege permissions to the pod.
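In practice, IRSA is wired up by annotating the pod's ServiceAccount with the IAM role that Terraform provisions. A minimal sketch, where the ServiceAccount name, account ID, and role name are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prima-api            # assumed ServiceAccount name
  annotations:
    # Placeholder ARN; the real role is created by Terraform
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/prima-api-role
```

Pods using this ServiceAccount receive temporary AWS credentials scoped to that role, so no static keys are mounted.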
The application is deployed using a custom Helm chart.
- Deployment with configurable replicas
- Liveness & readiness probes
- Horizontal Pod Autoscaler (HPA)
- Resource requests & limits
- Environment-based configuration
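These chart features are typically driven from values.yaml. A hedged sketch of what such values might look like; the key names and the /health probe path are assumptions about this chart, not its actual schema:

```yaml
replicaCount: 2              # configurable replicas

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

autoscaling:                 # Horizontal Pod Autoscaler
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70

livenessProbe:
  httpGet:
    path: /health            # assumed health endpoint
    port: 8000
readinessProbe:
  httpGet:
    path: /health
    port: 8000
```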
A YAML-based CI/CD pipeline is provided using GitHub Actions.
- Lint and test Python code
- Build Docker image
- Security scanning (optional)
- Helm chart validation, and updating the image tag in values.yaml to the latest build
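A minimal GitHub Actions workflow covering these stages could look like the sketch below. The Makefile targets match those listed earlier, while the chart path and image name are illustrative assumptions:

```yaml
name: ci
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Lint and test Python code (targets from the Makefile above)
      - run: make lint
      - run: make test
      # Build an immutable Docker image tagged with the commit SHA
      - run: docker build -t prima-api:${{ github.sha }} .
      # Validate the Helm chart (assumed chart path)
      - run: helm lint ./helm/prima-api
```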
- Shift-left testing improves reliability
- Immutable Docker images reduce deployment risk
- YAML-based workflows ensure transparency and portability
The following improvements would be implemented in a real production environment:
- HTTPS via Ingress + cert-manager
- WAF and rate limiting
- Centralized logging (ELK / CloudWatch)
- Metrics with Prometheus & Grafana
- Distributed tracing (OpenTelemetry)
- Blue/Green or Canary deployments
This project demonstrates a reliability-driven approach to application delivery, focusing on automation, scalability, security, and operational excellence. All components are designed to be production-ready while remaining simple and maintainable.