Information Security
Data collection
Plandek is designed to collate only metadata that does not contain sensitive client intellectual property (IP).
Data sources
Plandek data is sourced from the underlying tools used across the software delivery process. Key data sources are:
Workflow management tools (Jira, Azure)
Code repositories (GitHub, GitLab, Bitbucket)
CI/CD tools (Jenkins, CircleCI)
Incident tools (PagerDuty, New Relic)
Deployment API pipeline (Webhook call)
Time series API (Webhook call)
Types of data collected
Plandek stores only metadata regarding the software development process and avoids storing data that may hold sensitive IP. As such, Plandek does NOT store data such as ticket descriptions, source code, commit messages, attachments or comments.
Examples of metadata which Plandek may store include information such as:
Transitions of issues
Time of commits and pull requests
The person who performed the action
Data provided by Webhook regarding the start of deployments or time series data (as provided by the customer)
For an exhaustive listing of data gathered, see the data stored per the gatherer document.
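To illustrate the principle, here is a minimal sketch of the kind of whitelist filtering described above. The field names are purely illustrative and are not Plandek's actual schema; only fields on an explicit allow-list survive, so free-text content such as descriptions and comments can never be copied.

```python
# Hypothetical sketch: reduce a raw issue payload to metadata only.
# Field names are illustrative, not Plandek's actual schema.
ALLOWED_FIELDS = {"key", "status", "transitions", "assignee", "updated"}

def strip_to_metadata(raw_issue: dict) -> dict:
    """Keep only allow-listed metadata fields; sensitive text is never copied."""
    return {k: v for k, v in raw_issue.items() if k in ALLOWED_FIELDS}

raw = {
    "key": "PROJ-42",
    "status": "In Progress",
    "description": "Secret business logic ...",   # dropped
    "comments": ["internal discussion"],          # dropped
    "updated": "2024-05-01T10:00:00Z",
}
metadata = strip_to_metadata(raw)
```

An allow-list (rather than a block-list) is the safer design: a newly added sensitive field is excluded by default instead of leaking silently.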
Data transport encryption
Access to all software development tools such as Jira, GitHub, Bitbucket and CircleCI is performed over encrypted connections (HTTPS, using the ciphers and encryption types listed below) with supplied credentials or API keys.
Jira: TLS 1.3, X25519, AES_128_GCM
Azure: TLS 1.3, X25519, AES_256_GCM
GitHub: TLS 1.3, X25519, AES_128_GCM
Bitbucket (cloud): TLS 1.3, P-256, AES_128_GCM
GitLab: TLS 1.2, ECDHE_RSA with X25519, AES_128_GCM
Jenkins: TLS 1.2, ECDHE_RSA with P-256, AES_256_GCM
CircleCI: TLS 1.3, X25519, AES_128_GCM
PagerDuty: TLS 1.3, X25519, AES_128_GCM
New Relic: TLS 1.3, X25519, AES_128_GCM
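You can verify this kind of transport security yourself with the Python standard library. The sketch below builds a client context that refuses anything below TLS 1.2 (the minimum in the table above) and reports the protocol and cipher an endpoint actually negotiates; the negotiated parameters depend on the server and the local OpenSSL build, so results may differ from the table.

```python
import socket
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def negotiated_tls(host: str, port: int = 443) -> tuple:
    """Return (protocol_version, cipher_name) negotiated with an endpoint."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with strict_tls_context().wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (cipher_name, protocol_version, secret_bits)
            return tls.version(), tls.cipher()[0]
```

Calling `negotiated_tls("api.github.com")`, for example, reports the live TLS version and cipher for that data source.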
Data handling
To pass deployment-related data to Plandek via the encrypted API connection (HTTPS using TLS 1.2), an access token must be presented to authenticate the caller.
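A sketch of such an authenticated webhook call is shown below. The URL and payload fields are placeholders, not Plandek's real endpoint or schema; consult the Plandek API documentation for those.

```python
import json
import urllib.request

def build_deployment_request(token: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated HTTPS request carrying deployment metadata.
    The URL and payload schema here are illustrative placeholders."""
    return urllib.request.Request(
        "https://api.example-plandek-endpoint.com/deployments",  # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # access token authenticates the caller
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_deployment_request(
    "YOUR_TOKEN",
    {"pipeline": "main", "started_at": "2024-05-01T10:00:00Z"},
)
```

Because the connection is HTTPS, the token and payload are encrypted in transit; the bearer token identifies and authenticates the caller on arrival.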
Plandek will never change data in the source systems and can use read-only accounts.
Plandek analysis is performed on short-lived Kubernetes pods, with the raw data being purged once the analysis and indexing are complete.
For customers with specific privacy requirements, the gathering stage of the process can be run on the customer side so the intellectual property is never on Plandek servers.
All access credentials held by Plandek are encrypted with Google Key Management Service, giving us the ability to monitor decryptions and to revoke and rotate keys regularly.
Data storage
Storage locations
Plandek production databases are:
Elasticsearch, hosted by Elastic in Google Cloud Belgium. This stores the indexed data that is used to display the various metrics.
Postgres databases, also hosted in Google Cloud Belgium. These store customer- and user-related data, as well as event data.
Google Cloud SQL, also hosted in Google Cloud Belgium.
Data security and encryption
Since Elasticsearch, the Postgres databases and Cloud SQL are hosted in Google Cloud, they are encrypted (AES-256) when written and so are encrypted at rest. More details can be found here.
Elastic has an exemplary security model. By default, communication from the Internet is encrypted with Transport Layer Security (TLS 1.3), and clusters are deployed behind proxies that are not visible to Internet scanning. Access controls enforce user authentication and authorisation. See more about Elastic’s security here.
Data access
Enforcing privacy between customers
The Plandek cloud platform is multi-tenant and as a consequence, we employ a variety of technical measures to ensure that your data can only be accessed by your team, including fine-grained role-based permissions and linkage of data to client ID.
To date, penetration tests have not found any weaknesses in our separation between customers.
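The pattern behind this separation can be sketched as follows. This is illustrative only, not Plandek's actual code: the point is that every data access is scoped to the caller's client ID, so one customer's query can never match another customer's rows.

```python
# Illustrative sketch of tenant isolation: every query is scoped to the
# caller's client ID before any other filter is applied.
def scoped_query(records: list, client_id: str, **filters) -> list:
    """Return only rows belonging to client_id that match the filters."""
    return [
        r for r in records
        if r["client_id"] == client_id
        and all(r.get(k) == v for k, v in filters.items())
    ]

data = [
    {"client_id": "acme", "metric": "cycle_time", "value": 4},
    {"client_id": "other", "metric": "cycle_time", "value": 9},
]
rows = scoped_query(data, "acme", metric="cycle_time")
```

Making the tenant filter mandatory in the query layer, rather than optional in each caller, is what makes cross-tenant leakage structurally impossible rather than merely unlikely.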
Protection of your access credentials
All access credentials are encrypted with Google Key Management Service, giving us the ability to monitor decryptions and to revoke and rotate keys regularly.
Authentication
Plandek’s authentication system is built on Auth0 and supports either a username and password or your own single sign-on service. You can read more about Auth0’s security here.
Plandek supports fine-grained role-based authentication.
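A minimal sketch of fine-grained role-based access control is shown below; the role and permission names are hypothetical and do not reflect Plandek's actual permission model.

```python
# Hypothetical role-to-permission mapping; names are illustrative only.
ROLE_PERMISSIONS = {
    "viewer": {"dashboards:read"},
    "admin": {"dashboards:read", "dashboards:write", "users:manage"},
}

def is_allowed(roles: set, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)
```

Checks like `is_allowed({"viewer"}, "dashboards:read")` run on every request, so a user's effective access is always the union of their roles' permissions and nothing more.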
Plandek employee access to customer data
Plandek strictly limits employee access to client data to only what is necessary to support the service or resolve incidents, always viewing the minimum data required. Access to customer data and production systems is restricted to essential, vetted employees based in the EEA, all of whom are bound by confidentiality obligations.
Employee systems access
All access to internal systems and tools is only possible through either a VPN or Google Cloud Identity Aware Proxy and all employee devices are encrypted. We also mandate strong passwords and 2FA for employee accounts.
Data disposal
Data will be deleted on request by a customer.
Customer accounts are also deleted from Plandek after a period of non-use. Around six months after the last login, the customer is alerted by email that their account is due to be deleted. If there is no login within seven days of the alert, the account is deleted.
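The dormant-account policy above can be sketched as a simple date calculation; the thresholds are taken from the policy ("around 6 months" is approximated here as 180 days), while the function itself is illustrative.

```python
# Sketch of the dormant-account policy: alert ~6 months after the last
# login, delete if there is no login within 7 days of the alert.
from datetime import datetime, timedelta

ALERT_AFTER = timedelta(days=180)   # ~6 months of non-use (approximation)
GRACE_PERIOD = timedelta(days=7)

def account_action(last_login: datetime, now: datetime) -> str:
    """Return the policy action for an account given its last login."""
    idle = now - last_login
    if idle >= ALERT_AFTER + GRACE_PERIOD:
        return "delete"
    if idle >= ALERT_AFTER:
        return "alert"
    return "keep"
```

For example, an account last used a month ago is kept, one idle for just over six months triggers the email alert, and one idle past the seven-day grace period is deleted.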
Systems architecture overview
Gathering
Plandek’s gatherers run for each client and connect to the APIs exposed by the tools used by the client. They fetch raw data, which is stripped down to the minimal amount of metadata required to power the product. This metadata is then sent encrypted to Plandek’s API to be processed. This part of the system can run on-premise or in our cloud.
Plandek on-premise data gatherer option
Plandek offers clients the option of an on-premise data gatherer. This option ensures that sensitive data (received automatically via the connected APIs and not required by Plandek) are removed from the dataset before it is encrypted and exported – and therefore such data never leaves the client’s network.
The Plandek data collection process is separated into several stages to ensure that your source code does not leave your network and to put minimal strain on your services.
The on-premise gatherers are designed to run on a Kubernetes cluster and are packaged as Docker containers. They comprise a long-running orchestrating component, which communicates with Plandek’s systems and launches short-lived gathering pods on the Kubernetes cluster.
Processing
Processing runs on Plandek’s servers and converts the metadata into events which are used to generate the metrics that power the Plandek Web Application.
Metrics
Metrics runs on Plandek’s servers, consuming the output of the processing stage to generate the metric data that powers the dashboards.
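As a concrete illustration of the processing-to-metrics flow, the sketch below turns issue transition events (the kind of metadata processing emits) into a cycle-time metric. The event shape and status names are hypothetical.

```python
# Illustrative events-to-metric computation; the event schema is
# hypothetical, not Plandek's actual data model.
from datetime import datetime

events = [
    {"issue": "PROJ-1", "to": "In Progress", "at": datetime(2024, 5, 1, 9)},
    {"issue": "PROJ-1", "to": "Done", "at": datetime(2024, 5, 3, 17)},
]

def cycle_time_hours(events: list, issue: str) -> float:
    """Hours from the first move into 'In Progress' to the last 'Done'."""
    start = min(e["at"] for e in events if e["issue"] == issue and e["to"] == "In Progress")
    end = max(e["at"] for e in events if e["issue"] == issue and e["to"] == "Done")
    return (end - start).total_seconds() / 3600
```

Note that only timestamps, issue keys and status names are needed: the metric is computed entirely from metadata, with no ticket content involved.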
Web application
Plandek’s web application is built in React over a high-performance Node.js backend. We conduct an external white-box penetration test and security audit on our infrastructure and externally accessible services at least annually.
Application security
The Plandek platform is hosted on the Google Cloud Platform in Belgium. Plandek follows Google’s best practices for configuration and security. The Plandek infrastructure and software are audited and pentested on a regular basis and any issues that are identified are rapidly resolved. Read more about Google Cloud’s security here.
Only the Plandek web application is exposed on the internet; all other resources are firewalled and only accessible via Google Cloud Identity-Aware Proxy or our VPN. The Plandek web application is only available via HTTPS, protected by TLS 1.3.
Pentests and vulnerability scanning
Plandek carries out a number of tests to ensure security and to check for vulnerabilities:
Plandek undertakes a white-box source code audit and penetration test annually, covering our web applications, backend services and the underlying infrastructure. We also run automated monthly vulnerability scans using Intruder (and ad hoc scans as new threats are found) to identify risks and weaknesses in our systems.
Snyk is used to scan containers.
SonarCloud is used to scan code for the OWASP Top 10 and emerging threats.
Backups and disaster recovery
Plandek infrastructure covers multiple availability zones, and our databases are backed up and tested automatically daily.
Annual end-to-end disaster recovery tests are carried out in the first quarter of each year.
Security documentation
Security FAQs
Plandek holds all data in the Google Cloud Platform Belgium region. Metric data is stored in Elastic Cloud in the same Google Cloud region.
Data stored by gatherer
Details of data stored by gatherer including: Github/Bitbucket/Gitlab, JIRA, Harvest, Tempo, Forecast and CI services.
On-premise data gathering
An overview of the Plandek data collection process covering: gathering, processing, metrics, and how this is deployed and managed.
Information security summary
An overview of Plandek’s information security covering all the topics on this page.
Looking for something else?
Pen test audit results, Information Security Policy, Infosec Risk Management Policy, and SOC 2 Report are available upon request.