EstimateIntel
Document Version: 1.0  |  2026-03-28
Classification: Confidential

Technical Security & Architecture Brief

Prepared for Beck Building Company - IT & Security Review
01. Database Architecture

The system operates two entirely separate database instances. Your project data never touches the benchmark layer directly; it passes through a one-way transformation that strips all identifiable information before any aggregation occurs.

Beck Private Database (tenant-isolated)
Platform: Supabase PostgreSQL
Security: Row-Level Security enforced
Region: US-based (AWS us-east-1)
Isolation: Separate instance (not a shared schema)

Transformation Layer (firewalled on both sides)
One-way and irreversible; strips all identifiers before aggregation.

Regional Benchmark Database (aggregate only)
Platform: Supabase PostgreSQL
Contents: Averages & ranges only
Region: US-based (AWS us-east-1)
Isolation: Separate instance from the Private DB
No reverse path exists. Once data passes through the transformation layer, there is no technical mechanism to reconstruct the originating record from the benchmark database. The flow is architecturally one-directional.
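The one-directional flow described above can be sketched as an allow-list projection. This is an illustrative sketch only; field names and generalization rules are assumptions, not EstimateIntel's production pipeline:

```python
# Minimal sketch of the one-way transformation layer (hypothetical field
# names). Only non-identifying, generalized attributes are emitted;
# everything else is dropped, so no reverse path exists by construction.

# Fields permitted to leave the private database, and how each is generalized.
ALLOWED = {
    "division_code": lambda v: v,                 # trade-category level only
    "project_type":  lambda v: v.split(":")[0],   # broad category, no detail
    "geography":     lambda v: "Mountain West Colorado",  # region, not address
}

def transform(private_record: dict) -> dict:
    """One-way projection: identifiers never appear in the output."""
    return {field: fn(private_record[field])
            for field, fn in ALLOWED.items() if field in private_record}

record = {
    "project_code": "BECK-2025-014",   # identifier: stripped
    "division_code": "03",
    "project_type": "commercial:office",
    "geography": "1200 Main St, Denver, CO",
    "bid_amount": 1_250_000,           # sub pricing: stripped
}
out = transform(record)
assert "project_code" not in out and "bid_amount" not in out
```

Because the output dictionary is built only from the allow-list, a new private field can never leak into the benchmark by default; it must be explicitly added and generalized first.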
02. Field Inventory (Both Databases)

Private Database (Beck-only). These fields are stored in your isolated instance. No other company can access or query this data.

Field Name | Description | In Benchmark?
project_code | Your internal project identifier | Not Included
division_code | CSI division reference (e.g., 03 Concrete, 05 Metals) | Aggregated Only
subcategory_code | AIA subcategory within a division (e.g., "09 30 00 Tiling") | Aggregated Only
line_item_description | Specific scope description (e.g., "Ceramic Floor Tile", "Concrete Slip Forming") | Never Entered
estimate_amount | Pre-construction cost estimate by division | Not Included
actual_amount | Final invoiced/reconciled cost by division | Not Included
variance | Delta between estimate and actual (dollar + percent) | Not Included
sub_trade | Subcontractor trade category | Not Included
bid_amount | Subcontractor bid amount (never enters benchmark) | Never Entered
change_order_amount | Change order totals per project/division | Not Included
project_type | Project category (e.g., luxury residential, commercial) | Category Only
geography | Project location (full address stored in private DB) | Region Only
square_footage | Project size metric | Range Band Only
timeline | Project duration in months | Not Included
sub_performance_metrics | Schedule adherence, completion rate by trade (NOT pricing) | Not Included
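"Range Band Only" for square_footage means the exact figure never leaves the private database; only a band label does. A minimal banding sketch, where the band edges are illustrative assumptions rather than production values:

```python
# Illustrative range-banding for square_footage. Band edges here are
# assumed, not EstimateIntel's actual values. The exact figure stays in
# the private database; only the band label reaches the benchmark.
BANDS = [(0, 5_000), (5_000, 20_000), (20_000, 50_000), (50_000, float("inf"))]

def band(sqft: float) -> str:
    for lo, hi in BANDS:
        if lo <= sqft < hi:
            if hi == float("inf"):
                return f"{lo:,}+ sqft"
            return f"{lo:,}-{hi:,.0f} sqft"
    raise ValueError("negative square footage")

assert band(12_340) == "5,000-20,000 sqft"
assert band(75_000) == "50,000+ sqft"
```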

Regional Benchmark Database. Contains only aggregated statistical ranges. No row in this database corresponds to any single project or company.

Field Name | Description | Source
region | Geographic region (e.g., Mountain West Colorado) | Aggregated
project_type_category | Broad project category; not specific identifiers | Aggregated
division_category | CSI division grouping (trade category level) | Aggregated
subcategory_category | AIA subcategory grouping within a division. Benchmarks are produced at two granularity levels: trade category (available first) and line item (available later, with more data). | Aggregated
avg_cost_per_sqft | Statistical average across contributing projects (see Section 03 for thresholds) | Aggregated
labor_rate_range | Published range band; not company-specific | Aggregated
material_cost_range | Published range band; not company-specific | Aggregated
variance_band | Typical estimate-to-actual variance range by division | Aggregated
sample_size | Number of contributing projects (always visible to the user) | Aggregated
Fields that never enter the benchmark under any circumstances: project_code, user_id, bid_id, vendor_name, address, owner_name, raw line_item_description text (specific descriptions from sub bids), unit_cost (specific pricing), raw m_bid_amount, raw m_actual_paid_to_date, or any specific dollar amounts tied to a single project. Sub pricing is specifically excluded; only sub performance variance (schedule adherence, completion rate) is used for performance tracking.
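A never-enter list of this kind is most robust when enforced as a hard check at the aggregation boundary. A sketch of such a guard, using the field names from the inventory above (the enforcement mechanism itself is an assumption):

```python
# Defensive check at the aggregation boundary (illustrative): any record
# still carrying a never-enter field is rejected outright rather than
# silently cleaned. Field names are taken from the inventory above.
NEVER_ENTER = {
    "project_code", "user_id", "bid_id", "vendor_name", "address",
    "owner_name", "line_item_description", "unit_cost",
    "m_bid_amount", "m_actual_paid_to_date",
}

def assert_benchmark_safe(record: dict) -> dict:
    leaked = NEVER_ENTER & record.keys()
    if leaked:
        raise ValueError(f"never-enter fields present: {sorted(leaked)}")
    return record

# A clean record passes through unchanged; a leak raises immediately.
assert_benchmark_safe({"division_code": "03"})
try:
    assert_benchmark_safe({"division_code": "03", "unit_cost": 42.5})
except ValueError as e:
    print(e)  # never-enter fields present: ['unit_cost']
```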
03. Aggregate Suppression Rules

These rules govern what data is allowed to appear in the benchmark layer. They exist to prevent statistical reverse-engineering of any contributor's data.

1. Two-level publication thresholds. Trade category benchmarks require a minimum of 10 contributing projects from 3 or more companies before publication. Line item benchmarks require a minimum of 10 contributing projects from 5 or more companies. Cells below these thresholds are suppressed entirely: not rounded, not estimated, not shown.

2. Geographic aggregation at the regional level. Data is currently aggregated at the Mountain West Colorado region level, not by county or sub-region. County-level resolution becomes available only once the regional dataset supports county-level suppression thresholds independently.

3. Multi-year rolling averages (minimum 2-year window). Point-in-time figures are not published. All figures represent a minimum 2-year rolling window to prevent year-specific identification.

4. Small-cell suppression: hidden, not masked. If a filter combination produces a cell below the minimum threshold, that cell is removed from the output entirely. The system does not round, approximate, or display a warning value; the cell simply disappears.

5. No combination of filters can narrow results below threshold. The system enforces threshold checks at the query level, not the display level. A user cannot chain filters (region + project type + division + date range) to isolate a cell that would otherwise be suppressed.

6. Sub pricing never enters the aggregate layer. Subcontractor bid pricing is stored in the private database only. The benchmark layer receives sub performance variance metrics (on-time rate, completion rate) only, never dollar amounts.

7. Vendor performance aggregate thresholds. Vendor performance benchmarks require 5 or more contributing companies before publication. Vendor benchmarks are only visible to GCs who have used that vendor (reciprocity model). A GC cannot view aggregate performance data for a vendor they have no history with.
8. Statistical variance protection. All published aggregate values include built-in statistical variance protection. No external system can request or receive benchmark data; benchmarks are rendered inside the application only, with no outbound API connections.
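Rules 1, 4, and 5 combine into a single query-level gate. A minimal sketch, using the thresholds stated above (the surrounding query machinery is assumed):

```python
# Sketch of query-level small-cell suppression (rules 1, 4, and 5).
# A cell is published only if it meets both the project AND company
# minimums for its granularity level; otherwise it is omitted from the
# result set entirely, never rounded or masked.
THRESHOLDS = {
    "trade_category": {"min_projects": 10, "min_companies": 3},
    "line_item":      {"min_projects": 10, "min_companies": 5},
}

def publishable(level: str, n_projects: int, n_companies: int) -> bool:
    t = THRESHOLDS[level]
    return n_projects >= t["min_projects"] and n_companies >= t["min_companies"]

def run_query(cells: list) -> list:
    """Suppression happens here, before display: dropped, not flagged."""
    return [c for c in cells
            if publishable(c["level"], c["n_projects"], c["n_companies"])]

cells = [
    {"level": "trade_category", "n_projects": 14, "n_companies": 4, "avg": 212.0},
    {"level": "line_item",      "n_projects": 12, "n_companies": 4, "avg": 9.8},
]
# Only the first cell survives: the line-item cell fails the 5-company rule.
assert len(run_query(cells)) == 1
```

Because the filter runs inside `run_query` rather than at render time, chaining display filters cannot resurrect a suppressed cell; it was never returned.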
04. Security Controls
Encryption at Rest
AES-256 applied at database and storage layer via Supabase/AWS. Encryption is managed-key at infrastructure level.
Encryption in Transit
TLS 1.3 enforced on all connections. Older protocol versions (TLS 1.0, TLS 1.1) are disabled.
Authentication
User ID-based via Supabase Auth. MFA is available and can be enforced as a policy for all Beck user accounts.
Row-Level Security
Database-level tenant isolation enforced via Supabase RLS policies. Beck's data is in a separate instance, not a separate schema on a shared instance.
Access Control
Named individuals only. Access matrix provided in the mutual NDA. All access events are logged and auditable on request.
Session & Account Policies
Session timeout enforced after inactivity. Account lockout triggered after failed login attempts. Policies configurable per organization.
05. Compliance Status

The following table reflects current, honest compliance posture. We do not overclaim certifications in progress.

Layer | Description | SOC 2 Status
AWS (infrastructure) | Amazon Web Services: underlying compute, storage, and networking for both database instances | SOC 2 Type II (report available on request)
Supabase (database platform) | Managed PostgreSQL database, authentication, and row-level security enforcement layer | SOC 2 Type II (report available on request)
EstimateIntel (application layer) | The software application, API layer, transformation pipeline, and benchmark engine built by EstimateIntel | 18-month roadmap (CAIQ self-assessment available now)
Infrastructure is SOC 2 Type II certified today. The application layer (EstimateIntel software) is on an 18-month certification roadmap. For the pilot period, we provide a CAIQ (Consensus Assessment Initiative Questionnaire) self-assessment that maps our current controls to SOC 2 criteria. This document is available to Beck's IT team upon request.
06. Data Flow: Sage → System
This is a one-way data handoff, not an API integration. The system never connects to Sage, never requests credentials, and never establishes a live connection to Beck's accounting environment. There is no attack surface on Beck's accounting system.
1. Beck exports a report from Sage
Standard Sage export functionality; no third-party plugin or integration required. Beck's accounting team exports using the native export tools they already use.
2. File is uploaded via secure transfer
Beck forwards the file through a secure upload portal. Transfer is encrypted via TLS 1.3. No email attachments, no FTP, no shared drives.
3. System ingests and maps to the private database
The software application processes the file and maps fields to Beck's private database at the line-item level (Division > Subcategory > Description), using a universal construction language based on AIA/CSI MasterFormat. For PDF documents, a secure API call extracts structured data from the document (the same type of document parsing used by Adobe, ABBYY, Amazon Textract, and Google Document AI). The original file is retained in the private environment only.
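Step 3's mapping can be illustrated on a CSV export. The column names and cost-code convention below are assumptions for illustration; actual Sage report layouts vary by configuration:

```python
# Illustrative mapping of a Sage CSV export into the private database's
# Division > Subcategory > Description hierarchy. Column names and the
# "division-subcategory" cost-code convention are assumed, not Sage's
# actual layout.
import csv
import io

SAMPLE_EXPORT = """Cost Code,Description,Estimate,Actual
03-100,Concrete Slip Forming,185000,192400
09-300,Ceramic Floor Tile,64000,61150
"""

def ingest(csv_text: str) -> list:
    rows = []
    for r in csv.DictReader(io.StringIO(csv_text)):
        division, subcategory = r["Cost Code"].split("-", 1)
        est, act = float(r["Estimate"]), float(r["Actual"])
        rows.append({
            "division_code": division,
            "subcategory_code": subcategory,
            "line_item_description": r["Description"],
            "estimate_amount": est,
            "actual_amount": act,
            "variance": act - est,                    # dollar delta
            "variance_pct": (act - est) / est * 100,  # percent delta
        })
    return rows

rows = ingest(SAMPLE_EXPORT)
assert rows[0]["division_code"] == "03" and rows[0]["variance"] == 7400.0
```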
4. Intelligence is surfaced in the estimating interface
Beck's estimators see reconciled historical data alongside new estimates. The software surfaces patterns and variance analysis from Beck's own historical record. In addition to cost data, the system tracks RFIs (Requests for Information), change orders, and warranty callbacks, all linked to specific bid line items. This enables analysis of which scope items and specifications generate the most downstream issues, including schedule impact and invisible general-conditions costs absorbed by the GC.
Item | Status
Live connection to Sage | No
Sage credentials required | No
Beck accounting network exposed | No
Replaces Sage | No (runs in parallel)
Transfer encryption | TLS 1.3
07. Backup and Recovery
Control | Detail
Automated backups | Daily automated backups; all backup storage is US-based
Point-in-time recovery | 30-day retention window; any point within the window can be restored
Backup restoration testing | Annual restoration test performed and documented
Data export on request | Full data export delivered in CSV or Excel format within 5 business days of written request
08. Service Level Agreements

Uptime target: 99.5% monthly. Calculated on a rolling 30-day window, excluding planned maintenance windows.
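The 99.5% target translates into a concrete downtime budget, which is worth stating explicitly:

```python
# Downtime budget implied by a 99.5% uptime target on a rolling 30-day
# window (planned maintenance excluded, per the SLA).
window_hours = 30 * 24                       # 720 hours in the window
allowed_downtime = window_hours * (1 - 0.995)
assert round(allowed_downtime, 2) == 3.6     # hours of unplanned downtime
# i.e., roughly 3 hours 36 minutes per rolling 30-day window
```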

Priority | Definition | Acknowledgment | Resolution Path
P1 (Critical) | Full system outage or confirmed security breach | 1 hour | Status update: 4 hr; resolution path: 24 hr
P2 (High) | Partial feature loss or significant performance degradation | 4 hours | Resolution target: 48 hr
Planned Maintenance | Scheduled upgrades and infrastructure changes | 72-hour advance notice | Scheduled outside Mountain Time business hours
A named security escalation contact at EstimateIntel is provided in the pilot agreement. Beck will have a direct, named point of contact, not a support queue.
09. Termination and Data Handling
Item | Detail
Data export on termination | All Beck project data exported in standard format (CSV/Excel) within 30 days of termination notice
Deletion timeline | All copies of Beck project data, including backups, deleted from all systems within 30 days of termination
Deletion certification | Written certification of deletion provided to Beck upon completion
Benchmark data | Regional benchmark data, which contains no information attributable to Beck, is retained as a platform asset. No action is required from Beck; this data cannot be traced back to any specific project
10. FAQ for IT Team
Does our data feed a public AI model like ChatGPT?
No. The intelligence is coded directly into the software application by our software engineering team, and all code is reviewed and approved by engineers before deployment. Your data does not go to ChatGPT, OpenAI, or any public AI service.

The only external API call is for document parsing (extracting structured data from PDFs), which runs through Anthropic's commercial API. Anthropic's published policy states: "By default, we will not use your inputs or outputs from our commercial products (e.g. Anthropic API) to train our models." API data is retained for up to 7 days for safety monitoring, then permanently deleted. This is the same type of secure document parsing used by Adobe Acrobat, ABBYY, Amazon Textract, and Google Document AI.

For comparison, Procore uses customer data to train its own internal AI models, and Sage uses customer data for "product research, development, and innovation." Our data handling policy is stricter than both.
Can we deploy this on our own servers?
The system is designed as a managed cloud application. On-premise deployment is not currently offered. The cloud architecture is what enables the benchmark layer to function and the infrastructure certifications (AWS SOC 2 Type II, Supabase SOC 2 Type II) to apply.
What about multi-factor authentication?
MFA is available through Supabase Auth and can be enforced as a mandatory policy for all Beck user accounts. If Beck's security policy requires MFA for all users, this can be configured at the organization level before any Beck user is provisioned.
Who at EstimateIntel can access our data?
During the initial five-project analysis: Garry Dubbs (Founder), Garry Dubbs Sr. (Project Oversight), Patricia Dubbs (Data Normalization), and Skip Cox (Domain Expert, initial structuring only). Skip Cox's access is limited to the build phase and terminates after Phase 1. Ongoing access after Phase 1: Garry Dubbs, Garry Dubbs Sr., and Patricia Dubbs only. Steve Grandchamp (Strategic Advisor) has no data access at any phase. All access is logged at the database level, auditable, and documented in the mutual NDA.
What happens if EstimateIntel ceases operations?
Beck receives a full export of all project data and a written deletion confirmation within 30 days, per the pilot agreement. The export format is standard CSV/Excel โ€” readable without any EstimateIntel software. Beck retains full ownership of all data contributed to the system throughout the engagement.
How is sub pricing protected?
Subcontractor bid pricing never enters the benchmark layer under any circumstances. The system tracks sub performance variance (schedule adherence, on-time completion rates) for internal Beck use only; these metrics are stored in Beck's private database and do not feed any aggregate output accessible to other users. Vendor performance aggregate data is only visible to GCs who have used that vendor, and only when 5 or more companies contribute data on that vendor. Raw dollar amounts never enter the benchmark layer; only derived ratios (overrun percentages, cost-per-square-foot averages) do.
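The "derived ratios only" rule means a benchmark cell carries, for example, an overrun percentage rather than any dollar figure. A sketch with made-up numbers (the dollar amounts and cell layout below are hypothetical):

```python
# Sketch: only derived ratios leave the private layer. Dollar totals are
# illustrative and deliberately absent from the benchmark cell; what
# survives is a percentage, a per-square-foot average, and a sample size.
projects = [  # (estimate $, actual $, square feet) - hypothetical values
    (1_000_000, 1_060_000, 20_000),
    (750_000,     742_500, 15_000),
    (2_400_000, 2_520_000, 48_000),
]

overrun_pcts = [(act - est) / est * 100 for est, act, _ in projects]
avg_overrun_pct = sum(overrun_pcts) / len(overrun_pcts)
avg_cost_per_sqft = sum(act / sqft for _, act, sqft in projects) / len(projects)

benchmark_cell = {  # no field carries a project-level dollar amount
    "variance_band": round(avg_overrun_pct, 1),
    "avg_cost_per_sqft": round(avg_cost_per_sqft, 2),
    "sample_size": len(projects),
}
assert "estimate_amount" not in benchmark_cell
```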
Can someone reverse-engineer our data from the benchmark output?
The suppression rules described in Section 03 are specifically designed to prevent this. Trade category benchmarks require a minimum of 10 projects from 3+ companies; line item benchmarks require 10 projects from 5+ companies. Cells below threshold are hidden entirely, not rounded or masked. No filter combination can narrow below the threshold. These rules are enforced at the query layer, not the display layer.