How We Protect Your Data and Systems
Every system we touch belongs to you. We operate on your infrastructure, under your accounts, with access scoped to what the engagement requires. Here is exactly how we handle security, data protection, and incident response across every engagement.
Our Security Philosophy
Your Infrastructure. Your Control.
We are not a black box. When we work with you, we operate inside your environment, on your cloud accounts, under your domain. We do not pull your codebase onto our laptops and disappear. From day one, every deliverable, every deployment, every environment lives on your infrastructure. You own it all.
The safest way to protect client data is to never take custody of it in the first place.
In rare cases (proof-of-concept work or milestone-based product builds), we may use our own infrastructure temporarily. But for consulting, fractional leadership, and staff augmentation engagements, we operate entirely within your systems as an extension of your team.
Least Privilege. Named Accounts. Clean Offboarding.
We follow the principle of least privilege on every engagement.
Named Professional Accounts
Every team member gets an individual, named account on your platforms. No shared credentials. No generic "contractor" logins.
Multi-Factor Authentication
If your systems support MFA, we enforce it. For platforms that support SSO, we use your identity provider. This is not optional for our team.
Minimal Team Footprint
Three senior operators. Access on a need-to-know basis. If a client requests additional support from another partner, access is provisioned through a single point of control.
Clean Offboarding
When an engagement ends, access gets revoked. No lingering admin accounts. No forgotten SSH keys. Only what ongoing scope requires.
No Secrets in Code. Ever.
Credentials, API keys, tokens, and environment variables never touch a Git repository in plaintext. This is a hard rule.
Environment configuration is separated from application code. Secrets are injected at runtime through the platform's native secrets management - not stored in configuration files, not committed to version control, not copy-pasted into Slack channels. The tooling matters less than the discipline.
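As a minimal sketch of what runtime injection looks like in application code (variable names here are illustrative, not from a real engagement), the app reads secrets from its environment and fails fast if one is missing, so a misconfigured deployment is caught immediately rather than limping along:

```python
import os

def require_env(name: str) -> str:
    """Read a secret from the environment; fail fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# In production, the platform's secrets manager injects values into the
# process environment at startup; application code only ever reads them.
# The setdefault below stands in for that injection so the sketch runs.
os.environ.setdefault("EXAMPLE_DATABASE_URL", "postgres://user@host/db")
db_url = require_env("EXAMPLE_DATABASE_URL")
```

The point is the discipline, not the helper: no default values baked into code, no fallback to a checked-in config file.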
Code Reviews, Dependency Hygiene, and CI/CD from Day Zero
Security is baked into the development process, not bolted on afterward.
Mandatory Human Code Review
Every merge request requires a human sign-off. We may use AI-assisted review tools to speed up the process, but a human makes the final call. Some clients restrict AI tool usage for proprietary code. We respect that without exception.
Automated Security Scanning
We integrate tools like SonarQube and Dependabot into every project. Dependency vulnerabilities get flagged automatically. This matters especially in the Node.js/NPM ecosystem, where package security issues are frequent and well-documented.
CI/CD Pipelines from Day Zero
Automated build, test, and deployment pipelines are the first thing we set up on any project. They reduce human error, enforce consistency, and create an auditable deployment history.
Version Control Discipline
Git (GitHub preferred) is the backbone of every engagement. Branching strategies, commit hygiene, and protected main branches are standard practice.
Your Data Stays Yours
We do not use client data for anything other than the work you hired us to do.
- Non-disclosure agreements signed as standard practice for every engagement
- No client data used for training AI models, marketing materials, or internal purposes
- Client logos used only with permission - clients can opt out at any time
- Case studies published only with explicit client approval or fully anonymized
- Clean data removal when engagement ends with no ongoing maintenance agreement
- GDPR compliance including lawful basis for processing and data subject rights
Everything Lives on Your Cloud
You own every deliverable from day one. Code repositories, cloud infrastructure, deployment pipelines, monitoring dashboards - all of it lives under your account, your billing, your control.
If we part ways tomorrow, you lose nothing.
Our cloud infrastructure partner holds these certifications:
- AWS Certified Solutions Architect
- AWS Certified SysOps Administrator
- AWS Certified Developer
Hands-on expertise in VPC architecture, IAM policy configuration with least-privilege principles, Security Hub and GuardDuty monitoring, and multi-region disaster recovery strategies.
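To make "least-privilege IAM" concrete, here is a hedged sketch of the kind of policy we mean (the bucket name and prefix are hypothetical): read-only access scoped to a single path, instead of a broad s3:* grant across all buckets.

```python
import json

# Hypothetical example: a read-only IAM policy scoped to one S3 prefix.
# A service that only reads reports gets exactly that - nothing more.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-client-bucket/reports/*"],
        }
    ],
}

print(json.dumps(policy, indent=2))
```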
Backups That Actually Work
Having backups is table stakes. Having backups that restore correctly under pressure is what matters.
Automated Scheduled Backups
Defined retention policies with automated, scheduled backup runs.
Regular Restore Testing
A backup that has never been tested is not a backup. We run regular restore tests to verify ours.
Multi-Region Redundancy
Where the engagement scope requires it, data is replicated across regions.
Infrastructure as Code
Terraform-driven environments that can be rebuilt from scratch if needed.
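A restore test can be as simple as restoring a backup into a scratch location and comparing checksums against the source. The sketch below uses plain files and a copy as a stand-in for a real restore command (paths and the restore step are illustrative):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source: Path, backup: Path) -> bool:
    """Restore the backup into a scratch directory and compare checksums."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / source.name
        shutil.copy(backup, restored)  # stand-in for the real restore command
        return checksum(restored) == checksum(source)
```

In practice the restore step invokes the actual tooling (a database restore, a snapshot rehydration), but the verification idea is the same: prove the restored artifact matches what you think you backed up.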
When Something Goes Wrong
If a security incident occurs on a system we manage or have access to, this is how we respond.
1. Immediate Notification
The client is notified as soon as we become aware of the incident. No delays, no internal committees first. You hear about it immediately.
2. Containment
We isolate the affected systems to prevent further exposure. This takes priority over root cause analysis.
3. Investigation & Remediation
Once contained, we conduct a thorough investigation to understand what happened, what was affected, and how to prevent recurrence.
4. Post-Mortem
Every incident gets a written post-mortem documenting the timeline, root cause, impact, and specific preventive measures implemented.
GDPR compliance: Notification to the supervisory authority within 72 hours is the legal requirement. Our commitment is faster - you know the moment we know.
AI Is Powerful. It Also Introduces New Risk.
When building AI-powered features or integrating LLMs into workflows, we apply security practices specific to this domain.
Enterprise-Grade LLM Deployments
For clients with sensitive data, we deploy models on enterprise platforms (Azure OpenAI Service, Google Cloud Vertex AI) where the provider guarantees your data is not used for model training.
No Data Leakage Through Dev Tools
When a client's codebase is sensitive, we restrict AI-powered development tools that would send source code to third-party APIs. The client decides.
Prompt Sanitization
User inputs flowing into LLM prompts are treated as untrusted data. We structure prompts so attached data cannot be interpreted as instructions.
Data Residency Compliance
If your users are in Europe, their data stays in Europe. We ensure the full data pipeline remains within the geographic boundaries your compliance requires.
Honest About Where We Stand
We do not currently hold SOC 2 Type II or ISO 27001 certification as a firm. We are a three-partner boutique, not a 500-person consultancy.
What we do have is a combined 55 years of experience (20, 21, and 14 years across our three partners) operating within environments that hold those certifications. Our team has built and maintained software under ISO 13485, EU MDR (Class IIb), FDA 21 CFR Part 820, and ISO 27001 frameworks.
We know what those standards require because we have implemented the engineering practices that satisfy them.
If your engagement requires formal certification from your vendors, we will tell you upfront whether we can meet that bar. We would rather lose an engagement than overstate our compliance posture.
Ready to Talk?
Ready to level up your technology leadership?
Let's discuss how fractional CTO leadership, AI product development, security hardening, staff augmentation, or screening support can help your company move faster, build safer, and hire smarter.