A service for maintaining and securely sharing information on top-tier companies
- Running locally
- Creating migrations
- Architecture (network) diagram
- Infrastructure naming
- Logs
- Provisioning infrastructure
- Licenses and attributions
## Running locally

```bash
docker compose down && docker compose up --build
```
Visit http://localhost:8000/. Note that on every start of the application, a (mock) SSO user is created or updated with access to the site through the Django group 'Basic access', as well as staff and superuser access. This user is automatically logged in.
The above uses a development config, which differs from production in this user setup and in how static assets are handled. To use a config closer to production, run:

```bash
docker compose down && docker compose --profile prod up --build
```
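The details live in the development configuration, but as a rough sketch of what the mock user setup amounts to in Django terms (the username and exact calls here are illustrative, not this repository's actual code):

```python
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group

# Illustrative only: create or update a user in the way the development
# startup does for the mock SSO user, with 'Basic access', staff, and
# superuser permissions. The username here is a placeholder.
user, _ = get_user_model().objects.update_or_create(
    username="mock-sso-user",
    defaults={"is_staff": True, "is_superuser": True},
)
basic_access, _ = Group.objects.get_or_create(name="Basic access")
user.groups.add(basic_access)
```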
## Creating migrations

```bash
docker compose run --build web-dev python manage.py makemigrations
```
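The generated migration files appear under the relevant app's `migrations/` directory and should be committed. As a rough sketch of what a generated file looks like (the app, model, and field names here are placeholders, not this repository's actual models):

```python
from django.db import migrations, models


class Migration(migrations.Migration):
    # Migrations that must run before this one; filled in by makemigrations.
    dependencies = [
        ("companies", "0001_initial"),  # placeholder app and migration
    ]

    operations = [
        migrations.AddField(
            model_name="company",  # placeholder model name
            name="tier",           # placeholder field name
            field=models.CharField(max_length=16, default=""),
        ),
    ]
```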
## Architecture (network) diagram

How the infrastructure components are connected together can be seen in the following network diagram.
## Infrastructure naming

All infrastructure is named in the pattern `<prefix>-<name>-<suffix>`:

- `<prefix>` is the name of the service, by default `strategic-companies-list`
- `<name>` is an optional descriptive name for the specific piece of infrastructure
- `<suffix>` is the name of the environment, for example `prod`

For example, the name of the production ECS cluster would be `strategic-companies-list-prod`.
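As an illustration of the pattern only (this helper is not part of the repository):

```python
def infra_name(name=None, prefix="strategic-companies-list", suffix="prod"):
    """Compose an infrastructure name as <prefix>-<name>-<suffix>,
    omitting <name> when there is no descriptive name."""
    return "-".join(part for part in (prefix, name, suffix) if part)


print(infra_name())             # strategic-companies-list-prod
print(infra_name("codebuild"))  # strategic-companies-list-codebuild-prod
```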
## Logs

There are 9 types of logs, sent to 7 locations:

- VPC flow logs, saved to the S3 bucket `<prefix>-vpc-flow-log-<suffix>`
- ALB connection logs, saved to the S3 bucket `<prefix>-lb-connection-logs-<suffix>`
- ALB access logs, sent to the S3 bucket `<prefix>-lb-access-logs-<suffix>`
- Logs from standard output and standard error of the ECS task itself, saved to the CloudWatch log group `<prefix>-ecs-task-<suffix>`. These contain:
  - nginx logs, configured to use the CloudFront-Viewer-Address header for the client IP address
  - Web server logs
  - Django logs
- PostgreSQL logs for the PostgreSQL database, saved to the CloudWatch log group `/aws/rds/instance/<prefix>-<suffix>/postgresql`
- Upgrade logs for the PostgreSQL database, saved to the CloudWatch log group `/aws/rds/instance/<prefix>-<suffix>/upgrade`
- Logs from CodeBuild, saved to the CloudWatch log group `<prefix>-codebuild-<suffix>`

Both CloudWatch and S3 logs have a retention of 3653 days (~10 years).
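As an example of reading these programmatically, a minimal boto3 sketch that fetches recent events from the ECS task log group (the group name assumes the default prefix and a `prod` environment):

```python
import time

import boto3

logs = boto3.client("logs", region_name="eu-west-2")

# Fetch events from the last hour of the ECS task log group; the group
# name assumes the default prefix and the prod environment.
response = logs.filter_log_events(
    logGroupName="strategic-companies-list-ecs-task-prod",
    startTime=int((time.time() - 3600) * 1000),  # milliseconds since epoch
)
for event in response["events"]:
    print(event["message"])
```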
## Provisioning infrastructure

> [!IMPORTANT]
> The instructions below use Terraform to provision infrastructure for Strategic Companies List environments (dev, prod, etc). The "entry point" of each environment is an internet-facing Application Load Balancer (ALB). However, for flexibility, especially in multi-account setups, there are manual steps needed after the Terraform has run to make this ALB actually accessible. Be sure to run `terraform output strategic_companies_list` after provisioning the environment to find what these steps are.
>
> Note that manual changes to secrets in AWS Secrets Manager typically need a forced deployment of the ECS service to take effect.
>
> In future versions it is possible that these steps will be incorporated into the Terraform.
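For example, after changing a secret, a deployment can be forced with boto3. A minimal sketch, where the cluster and service names are assumptions based on the naming pattern above rather than confirmed by this repository:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-2")

# Force a new deployment so running tasks are replaced and pick up
# the changed secret values. Names assume the naming pattern above.
ecs.update_service(
    cluster="strategic-companies-list-prod",
    service="strategic-companies-list-prod",
    forceNewDeployment=True,
)
```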
AWS infrastructure for running the strategic-companies-list is defined through Terraform in `infra/`, although each environment needs manual bootstrapping. There are various options, but one possibility is:
- Create or get access to an AWS account for running the infrastructure.

- Install the AWS CLI if you don't have it already.

- Create an AWS profile configured for the AWS CLI, for example `my-profile-name`. See the AWS CLI documentation for more details.

- For each environment, create an S3 bucket of any name (for storing the Terraform state file).

- For each environment, create a DynamoDB table of any name, with a string `LockID` partition key (for storing a lock that prevents multiple changes to infrastructure at once). A boto3 sketch of creating both the bucket and the table follows this list.

- For each environment, create a directory outside of a cloned copy of this repository. A typical file layout would be the following.

  ```
  any-folder
  ├── strategic-companies-list (a cloned copy of this repository)
  └── strategic-companies-list-deploy
      ├── dev
      └── prod
  ```

- In each environment directory, create a `main.tf` as follows, replacing the `<...>` patterns with the S3 bucket name, DynamoDB table name, and environment name, and populating the `module` with additional variables defined in `infra/variables.tf`.

  ```hcl
  terraform {
    backend "s3" {
      region         = "eu-west-2"
      encrypt        = true
      bucket         = "<bucket_name>"
      key            = "<environment_name>.tfstate"
      dynamodb_table = "<dynamodb_table_name>"
    }
  }

  provider "aws" {
    region = "eu-west-2"
  }

  module "strategic_companies_list" {
    source = "../../strategic-companies-list/infra"

    external_domain_name = "scl.my-domain.gov.uk"        # The user-facing domain
    internal_domain_name = "scl.prod.my-domain.digital"  # The domain of the ALB
    authbroker_url       = "https://sso-domain.gov.uk"   # The URL of Staff SSO
    authbroker_client_id = "12345abcdef"                 # The Client ID of the app in Staff SSO
  }

  output "strategic_companies_list" {
    value     = module.strategic_companies_list
    sensitive = true
  }
  ```

- If you configured the AWS CLI through SSO access, run:

  ```bash
  AWS_PROFILE=my-profile-name aws sso login
  ```

- In each new environment's directory run:

  ```bash
  AWS_PROFILE=my-profile-name terraform init
  ```

  and then:

  ```bash
  AWS_PROFILE=my-profile-name terraform apply
  ```

- Find the manual steps needed:

  ```bash
  AWS_PROFILE=my-profile-name terraform output strategic_companies_list
  ```

- (Optional) Use direnv to avoid having to set `AWS_PROFILE=my-profile-name` for future Terraform commands, for example with an `.envrc` file containing `export AWS_PROFILE=my-profile-name`.
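As referenced in the bootstrapping steps above, a minimal boto3 sketch of creating the Terraform state bucket and lock table. The bucket and table names are placeholders of your choosing:

```python
import boto3

session = boto3.Session(profile_name="my-profile-name", region_name="eu-west-2")

# S3 bucket for the Terraform state file; the name is a placeholder.
session.client("s3").create_bucket(
    Bucket="my-terraform-state-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
)

# DynamoDB table with a string LockID partition key for Terraform's
# state lock; the name is a placeholder.
session.client("dynamodb").create_table(
    TableName="my-terraform-locks",
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```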
## Licenses and attributions

The code of Strategic Companies List is licensed under the MIT License.

However, the Strategic Companies List logo is not licensed under the MIT License: it is a modified version of Nursila's strategy icon, purchased via a Noun Project subscription.