AI Transparency & Ethics
Version 1.0 — October 2025
Our Purpose
Fibric exists to make the physical world intelligent — safely, transparently, and on human terms. We build AI systems that understand and act within buildings, campuses, and cities, helping people and organizations operate their spaces with clarity, control, and trust.
We believe intelligence in the physical world should feel as natural as turning on a light: it should serve you, not watch you; simplify your environment, not complicate it; and never obscure how it works or what it’s doing.
1. Principles that Guide Us
Transparency
Every action taken by Fibric’s AI systems must be explainable and traceable. Building operators can always see what the system knows, how it made a decision, and when it acted.
Accountability
We hold ourselves and our systems accountable for outcomes. AI agents log their decisions, reference sources, and operate under explicit permission boundaries defined by users and building administrators.
Privacy & Data Stewardship
Fibric does not sell user data. Data collected from buildings — temperature, occupancy, access events, energy use — belongs to the customer. Our models use anonymized, aggregated data to learn patterns that improve performance while preserving privacy.
Human Oversight
AI agents act under human oversight, not in place of it. Operators can approve, modify, or roll back actions. Every automated workflow is observable and reversible.
Security by Design
From device to cloud, every layer of Fibric’s platform follows zero-trust architecture. Access control, encryption, and tenant isolation are core to our design, not optional add-ons.
Fairness & Inclusion
Our systems are designed to perform equally well across property types, geographies, and occupant patterns. We actively test against data bias in occupancy detection, energy optimization, and predictive maintenance models.
Sustainability
Intelligence should reduce environmental impact. Every Fibric deployment is measured by its ability to lower waste — energy, time, and maintenance — without compromising comfort or safety.
2. Transparency in Practice
We treat transparency as a product feature, not a press release.
1. Explainability Dashboard
Every building has access to an AI activity log that shows:
- What data the AI analyzed (temperature, access, weather, etc.)
- What decision or recommendation was made
- Which Fibric skills or models were involved
- Whether human review or approval was required
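To make this concrete, here is a minimal sketch of what one activity-log entry could contain. The record shape and all field names (ActivityLogEntry, data_sources, skills_involved, and so on) are illustrative assumptions, not Fibric's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Minimal sketch of one AI activity-log entry. Field names are
# illustrative assumptions, not Fibric's actual schema.
@dataclass
class ActivityLogEntry:
    timestamp: datetime               # when the decision or action occurred
    data_sources: list[str]           # what data the AI analyzed
    decision: str                     # the decision or recommendation made
    skills_involved: list[str]        # which skills or models contributed
    human_review_required: bool       # whether operator approval was needed
    approved_by: str | None = None    # operator who approved, if any

entry = ActivityLogEntry(
    timestamp=datetime.now(timezone.utc),
    data_sources=["temperature", "occupancy", "weather"],
    decision="Lower zone 3 setpoint by 1.5 C overnight",
    skills_involved=["comfort-optimization"],
    human_review_required=False,
)
```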
2. Visible Permissions and Boundaries
Fibric’s AI agents run with defined scopes — they can only access the systems and data sources you authorize. For example, a “comfort optimization” agent can adjust HVAC and lighting, but not access door locks or cameras unless explicitly granted.
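As a sketch of how such scopes might be enforced in code, the snippet below defines an allow-list per agent and a guard that rejects out-of-scope resources. The names (ALLOWED_SCOPES, authorize) and the resource labels are assumptions for illustration, not Fibric's actual permission model.

```python
# Hypothetical scope declaration and enforcement guard. ALLOWED_SCOPES,
# authorize, and the resource labels are assumptions for illustration,
# not Fibric's actual permission model.
ALLOWED_SCOPES = {
    # comfort optimization may touch HVAC and lighting only;
    # door locks and cameras are absent unless explicitly granted
    "comfort-optimization": {"hvac", "lighting"},
}

def authorize(agent: str, resource: str) -> None:
    granted = ALLOWED_SCOPES.get(agent, set())
    if resource not in granted:
        raise PermissionError(f"{agent} is not authorized for {resource!r}")

authorize("comfort-optimization", "hvac")         # permitted, returns silently
try:
    authorize("comfort-optimization", "cameras")  # out of scope
except PermissionError as err:
    print(err)
```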
3. Data Provenance
All model training and inference pipelines include metadata tracing — so customers can see what categories of data influenced a model, when it was last updated, and which organization owns the data.
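A provenance record of this kind might look like the following sketch; the keys are assumptions based on the description above, not a documented Fibric format.

```python
# Illustrative provenance record; the keys are assumptions based on the
# description above, not a documented Fibric format.
model_provenance = {
    "model": "predictive-maintenance-v4",
    "data_categories": ["vibration", "runtime-hours", "work-orders"],
    "last_updated": "2025-09-12",
    "data_owner": "Example Properties LLC",
    "pipeline_reproducible": True,
}
```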
4. Anonymization and Aggregation
Before data leaves a local environment, it’s stripped of identifiers like room numbers, guest IDs, or device serials. Aggregated data is used for pattern learning across similar property types (e.g., hotels in similar climates).
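A minimal sketch of this identifier stripping, assuming a simple event dictionary and an illustrative list of identifier fields:

```python
# Sketch of identifier stripping before data leaves the local environment.
# The event shape and the identifier field names are assumed for illustration.
IDENTIFIER_FIELDS = {"room_number", "guest_id", "device_serial"}

def anonymize(event: dict) -> dict:
    """Return a copy of the event with direct identifiers removed."""
    return {k: v for k, v in event.items() if k not in IDENTIFIER_FIELDS}

raw = {
    "room_number": "412",
    "guest_id": "G-9981",
    "device_serial": "SN-00231",
    "temperature_c": 21.4,
    "occupied": True,
}
print(anonymize(raw))  # {'temperature_c': 21.4, 'occupied': True}
```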
5. Open Reporting
Fibric publishes periodic “Transparency Notes” summarizing model updates, known limitations, and the categories of data used. We also disclose any integration partners that access Fibric data under customer consent.
3. Ethical Use of Building Data
Data Ownership
The data generated by a building belongs to the building owner or operator. Fibric acts as a processor and steward, not an owner. Customers can export or delete their data at any time.
Consent and Control
Any integration with third-party systems requires explicit user consent and clear display of what data will flow between systems.
Human Privacy
Fibric never uses video or audio data for biometric profiling, identity recognition, or behavioral prediction. If occupancy sensors or microphones are present in a building, they are used solely for context (e.g., sound level to detect crowding) — never to identify individuals.
Model Ethics and Retraining
We continuously evaluate models for unintended consequences such as unfair heating/cooling distribution, false occupancy signals, or access anomalies that could disadvantage specific users. Retraining cycles include fairness and stability checks before deployment.
4. Safety, Reliability, and Human-in-the-Loop
- Fail-Safe Operations — All automation includes a fallback to manual control. If a model or integration fails, building systems revert to their native configurations.
- Simulation and Testing — Before an AI agent is allowed to act on live systems, it is tested in simulated environments using real-world data.
- Auditability — Every decision made by an AI agent is timestamped and stored in an immutable event ledger for compliance and diagnostics; see the sketch after this list.
- Human Override — Operators can always override AI actions in real time through the Fibric interface or API.
- Continuous Monitoring — System health and performance are monitored 24/7, with anomaly detection that prioritizes safety before optimization.
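The document does not specify how the immutable event ledger is implemented; one common technique is a hash-chained, append-only log, sketched below under that assumption. Because each entry commits to the previous entry's hash, altering any stored record breaks the chain and is detectable.

```python
import hashlib
import json

# Hash-chained, append-only ledger: each entry commits to the previous
# entry's hash, so editing any past record breaks the chain. This
# mechanism is an assumption; the document only states that decisions
# are timestamped and stored immutably.
def append_event(ledger: list[dict], event: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"event": event, "prev": prev_hash, "hash": digest})

ledger: list[dict] = []
append_event(ledger, {
    "ts": "2025-10-01T02:00:00Z",
    "agent": "comfort-optimization",
    "action": "setpoint -1.5C zone 3",
})
# Verification walks the chain and recomputes each hash; any mismatch
# reveals tampering.
```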
5. Model and Data Governance
Fibric’s AI architecture follows a “Responsible Chain of Intelligence”:
- Data Collection Layer — collects signals from sensors and APIs under user consent.
- Processing Layer — anonymizes, normalizes, and filters sensitive data.
- Model Layer — uses only pre-approved datasets and reproducible pipelines.
- Decision Layer — applies contextual constraints (policy, comfort, safety).
- Action Layer — executes commands only through verified integrations with built-in rollback support.
Each layer has its own audit, logging, and security policies. No agent can bypass these.
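As an illustration of the layered flow, the toy sketch below models each layer as a function whose input is the previous layer's output, so a command can only reach the Action Layer after passing every earlier stage. All function names and thresholds are invented for the example.

```python
# Toy model of the five-layer chain: each layer is a function whose input
# is the previous layer's output, so nothing reaches the Action Layer
# without passing every earlier check. All names and thresholds are
# invented for the example.
def collect() -> dict:                 # Data Collection Layer
    return {"zone": 3, "temp_c": 26.0, "guest_id": "G-1"}

def process(raw: dict) -> dict:        # Processing Layer: drop identifiers
    return {k: v for k, v in raw.items() if k != "guest_id"}

def infer(clean: dict) -> float:       # Model Layer: propose a setpoint delta
    return -1.5 if clean["temp_c"] > 24.0 else 0.0

def constrain(delta: float) -> float:  # Decision Layer: clamp to policy bounds
    return max(-2.0, min(2.0, delta))

def act(delta: float) -> str:          # Action Layer: verified, reversible command
    return f"adjust setpoint by {delta} C (rollback available)"

print(act(constrain(infer(process(collect())))))
```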
6. Partner and Developer Standards
Fibric’s ecosystem is open — developers can create custom skills and integrations. To maintain trust, all developers must:
- Declare what data their skill uses and how it’s processed (a hypothetical manifest sketch follows this list).
- Undergo a privacy and security review for any integration that reads or writes building data.
- Agree to Fibric’s Responsible AI Developer Agreement, which prohibits surveillance, discrimination, or non-consensual data sharing.
- Provide clear documentation and user-visible descriptions for every skill, including data inputs and outputs.
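For illustration, a skill declaration satisfying these requirements might look like the sketch below; the manifest keys are assumptions, since this document does not define Fibric's actual manifest format.

```python
# Hypothetical skill manifest as a Python dict. The keys mirror the
# requirements above but are assumptions; Fibric's actual manifest
# format is not defined in this document.
skill_manifest = {
    "name": "after-hours-energy-saver",
    "description": "Reduces HVAC load in unoccupied zones at night.",
    "data_inputs": ["occupancy", "hvac_state", "schedule"],
    "data_outputs": ["hvac_commands"],
    "processing": "on-premises only; raw data never leaves the building",
    "reviews": {"privacy": "passed", "security": "passed"},
}
```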
7. Governance and Oversight
- AI Ethics & Transparency Board — composed of internal and external advisors in data ethics, privacy law, and building automation. They review high-impact changes to models and data policy.
- Incident Disclosure — Any incident involving unintended data exposure or model malfunction is reported to affected customers within 48 hours.
- Third-Party Audits — Fibric undergoes independent security and model-integrity reviews at least annually.
- User Feedback Channel — Anyone can report concerns about Fibric AI behavior through the in-app “Transparency” section or at ethics@fibric.io.
8. Our Promise
AI that touches the physical world has to earn trust every day. At Fibric, we measure success not only by how intelligent our systems become, but by how transparent, reversible, and accountable they remain.
We don’t believe in opaque automation. We believe in visible intelligence — the kind you can see, question, and control.
Fibric, Inc.
AI Agents for the Physical World
www.fibric.io