Security
Have I Been Squatted is a security product. We help organizations detect domain squatting, typosquatting, and brand impersonation, so our customers trust us with sensitive data about their brand assets and their users. That trust demands that we hold ourselves to a high security standard.
This page describes our security posture in detail: how we classify the data we hold, the technical controls we have in place, how we manage access, how we respond to incidents, and who we share data with. If you are a security team evaluating us as a vendor, this page is written for you. If you have a question not answered here, email [email protected].
Data we retain#
Before describing how we protect data, it helps to be precise about what data we actually retain. We distinguish between the following categories.
| Category | Description | Examples |
|---|---|---|
| Account data | Information provided directly by you when creating and managing your account. | Name, email address, billing address, password hash (managed by Clerk) |
| Payment data | Billing and subscription information. We do not store raw card data; payment card processing is handled end-to-end by Stripe. | Subscription plan, invoice history, last-four digits of card (stored by Stripe) |
| Query & scan data | Domains, keywords, and brand terms you submit for monitoring. This is the core of what the product does. | Monitored domains, scan results, detected squats, alert configurations |
| Technical & usage data | Data we collect automatically as part of operating the service. Most of this is not personal information; some of it may be depending on jurisdiction. | IP addresses, browser and OS versions, session identifiers, page views, API request logs, error traces |
Infrastructure#
Cloud provider#
Our primary infrastructure runs on Amazon Web Services (AWS). Compute, storage, and our data pipeline all live within AWS. We use AWS services including S3 (object storage), DynamoDB (NoSQL database), Lambda (serverless compute), SQS (queuing), and Bedrock (managed AI inference). Our time-series and relational data is stored in Tiger Cloud (TigerData, formerly Timescale). Our front-end application is deployed on Vercel (hosting with global edge distribution).
Network edge#
We use Cloudflare and AWS CloudFront at the edge. Web traffic passes through Cloudflare before reaching our origin, while CloudFront serves API workloads and other backend traffic. Together, these edge services provide:
- DDoS mitigation: volumetric and application-layer attacks are absorbed at the edge before reaching our servers
- Web Application Firewall (WAF): malicious request patterns are blocked at the edge
- DNS: our authoritative DNS is managed by Cloudflare
- CDN: static assets are cached globally via Cloudflare and CloudFront to reduce latency and origin load
Data residency#
All customer data is stored and processed in the United States. We do not replicate customer data to regions outside the US. CDN traffic (cached assets, request routing) is processed at edge locations globally; persistent storage and application processing remain in the US.
Encryption#
| Layer | Standard |
|---|---|
| Data in transit | TLS 1.2 minimum; TLS 1.3 preferred. Enforced on all endpoints including API, dashboard, and webhooks. |
| Data at rest | AES-256. Applied across all S3 buckets, DynamoDB tables, and persistent volumes. |
| Backups | Encrypted using the same AES-256 standard before being written to storage. |
HTTP requests are redirected to HTTPS. We use HSTS with a long max-age to prevent protocol downgrade attacks.
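As a concrete illustration, a long-max-age HSTS response header looks like the following (the exact directives we send may differ; `includeSubDomains` and `preload` are shown as common hardening options, not a statement of our configuration):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```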
Backups & recovery#
- Automated daily backups of all primary datastores
- Point-in-time recovery (PITR) enabled on DynamoDB tables and Tiger Cloud; we can restore to any second within the retention window
- Backup integrity is tested periodically via restore drills
- Recovery time objective (RTO) and recovery point objective (RPO) are reviewed as part of our incident response planning
Application security#
Authentication#
Authentication is provided by Clerk. We do not implement our own authentication system; Clerk handles credential storage, session management, and token issuance. This means:
- Passwords are never stored in our own database. Clerk stores and validates credentials on our behalf
- Multi-factor authentication (MFA), including phishing-resistant MFA, is supported
- Single sign-on (SSO) is supported as standard for users authenticating via the Microsoft, Google, and GitHub providers. SAML and OIDC/EASIE authentication methods are available for Enterprise customers
- Session tokens are short-lived and rotated on sensitive operations
- Brute-force protection and account lockout are handled by Clerk's built-in controls
Authorization#
Access within the platform is controlled by role-based access control (RBAC). Different roles carry different permission sets; for example, team members can be granted read-only access to scan results without access to billing or account settings. We enforce authorization checks server-side on every API request; client-side state is never the sole control.
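A server-side RBAC check of this kind can be sketched as follows. The role and permission names below are illustrative only, not our actual schema:

```python
# Minimal RBAC sketch: a role maps to an explicit set of permissions,
# and every request is checked server-side against that mapping.
# Role and permission names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "admin": {"scans:read", "scans:write", "billing:manage", "account:manage"},
    "member": {"scans:read", "scans:write"},
    "viewer": {"scans:read"},  # read-only access to scan results
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

A viewer role would pass `is_authorized("viewer", "scans:read")` but fail `is_authorized("viewer", "billing:manage")`; unknown roles default to no permissions rather than erroring open.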
Secure development#
- All code changes go through peer review before merging.
- Infrastructure is defined and managed as infrastructure as code (Terraform), enabling version control, review, and reproducible deployments.
- Automated dependency scanning is run on every pull request and on a scheduled basis to catch known vulnerabilities in third-party packages.
- Security patches for critical and high-severity CVEs are applied as a priority within our normal release cycle.
- Principle of least privilege is followed when introducing new dependencies and cloud permissions; services are granted only the permissions they need to function.
- Secrets and credentials are stored securely (encrypted at rest and in transit) and are never committed to source control.
API security#
- All API endpoints require authentication except for explicitly public endpoints.
- Rate limiting is applied at the edge (AWS CloudFront) and at the application layer to prevent abuse.
- Input validation is applied at every API boundary. We do not pass unsanitized user input to downstream services.
- Webhook payloads are signed; recipients should verify signatures before processing.
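Webhook signature verification typically means recomputing an HMAC over the raw request body and comparing it in constant time. The sketch below assumes HMAC-SHA256 with a hex-encoded signature; consult the webhook documentation for the exact header name and signing scheme of the endpoint you are integrating:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Recompute HMAC-SHA256 over the raw payload and compare in constant time.

    Using hmac.compare_digest (rather than ==) avoids timing side channels
    when comparing the expected and received signatures.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Recipients should reject any payload whose signature fails this check before parsing or acting on its contents.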
Access controls#
Internal access to customer data#
Access to production systems and customer data is restricted to employees who require it for their role. We apply the principle of least privilege throughout:
- All access to production infrastructure is logged and auditable.
- We use separate accounts and roles for development and production environments to prevent accidental cross-environment access.
- Administrative access to cloud infrastructure requires phishing-resistant MFA.
Third-party access#
Our sub-processors (listed at the bottom of this page) access data only to the extent necessary to provide their services to us. Each operates under a Data Processing Agreement (DPA). We do not grant sub-processors broad or standing access to our production datasets; integrations are scoped to the minimum required.
Offboarding#
When an employee or contractor leaves, access to all internal systems and third-party services is revoked promptly as part of a standard offboarding checklist. Credentials are rotated for any systems where individual credentials cannot be cleanly revoked.
Monitoring & alerting#
We monitor the health and security of the platform continuously.
- Grafana covers infrastructure and frontend observability, including dashboards, error rate alerting, latency spikes, and anomalous request patterns.
- AWS CloudWatch provides infrastructure-level metrics and log aggregation for all Lambda functions and AWS services.
- Alerts are routed to on-call staff via Slack and pager integrations. Critical alerts have defined escalation paths.
Customer notification#
If a security incident results in unauthorized access to your data, we will notify affected customers promptly and in accordance with our obligations under GDPR, CCPA, and other applicable laws. Notifications will include a description of what happened, what data was affected, and what steps we are taking.
Business continuity#
- Critical services are deployed across multiple AWS Availability Zones to tolerate single-AZ failures without service interruption.
- The global edge networks of Cloudflare and AWS CloudFront provide resilience for traffic ingestion even if an origin region degrades.
- Automated backups and PITR allow us to recover data to a recent known-good state in the event of data corruption or loss.
- We test recovery procedures periodically to validate that our RTO and RPO targets are achievable in practice.
Responsible disclosure#
We believe that working with security researchers makes the internet safer for everyone. If you believe you have found a security vulnerability in our platform, please let us know responsibly.
Email [email protected] with a clear description of the issue, steps to reproduce, and the potential impact. We aim to:
- Acknowledge your report within 2 business days.
- Provide an initial triage assessment within 5 business days.
- Keep you informed as we investigate and remediate.
We ask that you do not publicly disclose the issue until we have had a reasonable opportunity to address it. We will not take legal action against researchers who report vulnerabilities in good faith and follow these guidelines.
We do not currently operate a public bug bounty program, but we do appreciate responsible disclosure and will credit researchers who contribute to our security (with their permission).
Sub-processors#
We use the following third-party sub-processors to deliver the service. Each sub-processor may process personal data on our behalf as described below, and each operates under a Data Processing Agreement (DPA) with us.
| Provider | Category | Data processed | DPA |
|---|---|---|---|
| Clerk | Identity & authentication | Names, emails, auth tokens, user profiles | DPA |
| Stripe | Payments | Names, emails, addresses, payment card data | DPA |
| AWS | Infrastructure (compute & storage) | Scan artifacts and generated assets handled by Lambda (serverless compute), Bedrock (AI inference), and S3 (object storage). Minimal PII; primarily operational data. | DPA |
| TigerData (Tiger Cloud) | Database (time-series) | Monitored domains, scan results, lookup data, alerts, and takedown requests. User-scoped metadata; may include contact emails. | DPA |
| Cloudflare | Infrastructure & CDN | IP addresses, HTTP traffic metadata | DPA |
| Vercel | Infrastructure & hosting | IP addresses, request logs, edge function data | DPA |
| PostHog | Product analytics | User behavior, session data, IP addresses | DPA |
| Grafana | Observability | Frontend errors, session traces, IP addresses | DPA |
| Intercom | Customer messaging | Names, emails, in-app messages, user metadata | DPA |
| Mailgun (Sinch) | Transactional email | Email addresses, message content | DPA |
| OpenAI | AI / ML inference | Query content (may include user-submitted data) | DPA |
| Slack | Internal notifications | Alert content (may include user identifiers in notification payloads) | DPA |
This list is reviewed and updated when we add or change sub-processors. If you have questions about a specific sub-processor or require a copy of a DPA, contact [email protected].
Questions#
If you have a security question not answered by this page, want to request a copy of a specific DPA, or need to complete a vendor security assessment, contact us at [email protected].
For privacy-related questions, see our Privacy Policy.