Shipping More by Writing Less: How Declarative Config Changed Our Workflow
We turned insight delivery into a config-driven workflow. Analysts define what they need; the system handles the rest. Fast, safe, and production-grade - without involving engineers.

In many organizations, delivering analytics and operational visibility is gated by engineering cycles. A PM or analyst might ask for a new metric or insight, and the next steps are painfully familiar: open a ticket, wait for a sprint, write backend code, review the PR, deploy it, validate it, and finally expose it in a dashboard. By the time it reaches production, the question that prompted the insight might already be irrelevant.
We wanted to challenge that entire workflow.
At Island, we aggregate a wealth of telemetry signals from our browser platform - like user analytics, device security posture, identity metadata, installed extensions, and more - while respecting tenant-level isolation and privacy constraints. From all this data, we deliver insights that help admins monitor device behavior, spot anomalies, and quickly optimize their work environment.
But we couldn’t afford to treat each new insight like a feature. That would mean writing code, scheduling engineering time, and going through multiple review and deployment cycles - each step adding delay and friction. In fast-moving environments like ours, that model just doesn't scale. We needed a way to turn questions into answers without treating every one like a software release.
So, we built a framework that flips the script. What if:
- No code had to be written
- No deploys were required
- No developers were involved
And yet, the result was a live, production-grade insight available for customers in the Management Console - powered by scheduled execution and historical trend data.
That’s exactly what we built: a self-service, config-driven insight engine that enables analysts to ship new insights in under an hour - no backend involvement needed. The system is built for scalability from the ground up, supporting multi-tenant execution, global distribution, and safe rollout at scale.
What Is an Insight?
An Insight is a self-contained analytic unit that:
- Detects trends, anomalies, or patterns - for example, an insight like "Privileged Users Accessing AI Tools" helps security teams identify whether high-permission users are engaging with unauthorized or risky generative AI services. This can prompt policy reviews, user outreach, or tighter controls before it escalates to a compliance issue.
- Is defined by a JSON file for UI metadata
- Is paired with an SQL query to extract data
- Supports feature flags, versioning, and real-time or batch execution
- Appears automatically in the UI with drill-down capability
- Executes safely across tenants in parallel
- Persists results for historical tracking and visualization
These insights become live within minutes after validation. They are dynamically rendered by the UI and executed via our backend logic with tenant-aware, service-aware routing.

How It Works
Let’s walk through how we define and deliver an insight in our system.
JSON Configuration
{
  "id": "ai_tool_access_by_privileged_users",
  "title": "Privileged Users Accessing AI Tools",
  "description": "Indicates access to AI platforms like ChatGPT, Bard, and Copilot by Admin or IT users.",
  "entityType": "User",
  "dataSource": "UserBrowserEvents",
  "query": "ai_tool_access_by_privileged_users.sql",
  "category": "Security",
  "severity": "Medium"
}
This metadata determines how the UI should render the insight, who should see it (via feature flags), and which backend service should execute it.
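Internally, this config maps onto a typed definition object. The sketch below shows roughly what that looks like with System.Text.Json; the property set, class names, and the id-to-Name mapping are simplified stand-ins for our actual model, which also carries feature flag, versioning, and execution-mode fields.
using System.Text.Json;
using System.Text.Json.Serialization;

// Simplified shape of an insight definition as loaded from the JSON config.
// Property names mirror the example above; the real model carries more fields.
public record InsightDefinition(
    [property: JsonPropertyName("id")] string Name,
    [property: JsonPropertyName("title")] string Title,
    [property: JsonPropertyName("description")] string Description,
    [property: JsonPropertyName("entityType")] string EntityType,
    [property: JsonPropertyName("dataSource")] string DataSource,
    [property: JsonPropertyName("query")] string QueryFile,
    [property: JsonPropertyName("category")] string Category,
    [property: JsonPropertyName("severity")] string Severity);

public static class InsightLoader
{
    // Loading a definition is a single deserialization call.
    public static InsightDefinition Load(string json) =>
        JsonSerializer.Deserialize<InsightDefinition>(json)!;
}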
SQL Logic
SELECT
    user_id,
    display_name,
    role,
    COUNT(DISTINCT domain) AS ai_tool_count,
    MAX(access_time) AS last_access
FROM
    user_browser_events
WHERE
    role IN ('Admin', 'IT')
    AND domain IN (
        'chat.openai.com',
        'bard.google.com',
        'copilot.microsoft.com',
        'claude.ai',
        'huggingface.co'
    )
GROUP BY user_id, display_name, role
HAVING COUNT(DISTINCT domain) >= 2
ORDER BY last_access DESC;
Analysts own this logic. They pair it with the JSON file and commit both to Git.
CI & Rollout
- Files are validated against schemas and business rules in CI.
- SQL is linted and verified against reference data.
- Validated files are versioned and uploaded to a global S3 bucket.
- The insight becomes instantly available across regions. Behind the scenes, it's automatically associated with a feature flag using a standard naming convention: toggle-<insight-name>-insight. This convention means there's no need to explicitly configure the flag when adding a new insight - just follow the pattern, and the system takes care of the rest (see the sketch below).
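To make that concrete, here is a minimal sketch of the kind of check the CI stage runs before an insight ships. The field names mirror the JSON example above; the class, helper, and normalization details are illustrative rather than our actual pipeline code.
using System;
using System.IO;
using System.Text.Json;

public static class InsightConfigChecks
{
    private static readonly string[] RequiredFields =
    {
        "id", "title", "description", "entityType",
        "dataSource", "query", "category", "severity"
    };

    public static string ValidateAndDeriveFlag(string configPath)
    {
        using var doc = JsonDocument.Parse(File.ReadAllText(configPath));
        var root = doc.RootElement;

        // Reject the commit if any required field is missing.
        foreach (var field in RequiredFields)
            if (!root.TryGetProperty(field, out _))
                throw new InvalidOperationException($"'{configPath}' is missing required field '{field}'");

        // The SQL file referenced by the config must live next to it.
        var sqlPath = Path.Combine(Path.GetDirectoryName(configPath) ?? ".", root.GetProperty("query").GetString()!);
        if (!File.Exists(sqlPath))
            throw new FileNotFoundException($"Referenced SQL file not found: {sqlPath}");

        // Derive the feature flag name from the toggle-<insight-name>-insight convention.
        // (The exact normalization of the name here is illustrative.)
        return $"toggle-{root.GetProperty("id").GetString()!.Replace('_', '-')}-insight";
    }
}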
That’s it - no backend code changes, no PR approvals, no deployments.
Insight Execution Architecture
To make this architecture easy to understand and communicate, we rely on a simplified flow diagram. It illustrates the end-to-end journey of how insights are defined, distributed, executed, and stored - from a single JSON file to a fully visible insight in the UI, backed by historical data in the database.

We support two distinct execution flows to cover both scheduled and on-demand usage.
Scheduled Batch Execution
A periodic job reads the latest insight configurations from S3. For each active and provisioned tenant, it creates a task and sends it to an SQS queue:
{
  "tenantId": "acme-corp",
  "insightId": "no_av_software"
}
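As a rough illustration of this fan-out, the sketch below enqueues one task per tenant per insight with the AWS SDK for .NET; the class name, queue URL wiring, and input lookups are placeholders, not our actual scheduler code.
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class InsightTaskScheduler
{
    private readonly IAmazonSQS _sqs;
    private readonly string _queueUrl; // injected from configuration in practice

    public InsightTaskScheduler(IAmazonSQS sqs, string queueUrl)
    {
        _sqs = sqs;
        _queueUrl = queueUrl;
    }

    // Fan out: one SQS message per (tenant, insight) pair, matching the payload shown above.
    public async Task EnqueueAsync(IEnumerable<string> tenantIds, IEnumerable<string> insightIds)
    {
        foreach (var tenantId in tenantIds)
        foreach (var insightId in insightIds)
        {
            var body = JsonSerializer.Serialize(new { tenantId, insightId });
            await _sqs.SendMessageAsync(new SendMessageRequest
            {
                QueueUrl = _queueUrl,
                MessageBody = body
            });
        }
    }
}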
Worker nodes poll from the SQS queue and run insight execution:
public async Task ExecuteInsightAsync(string tenantId, InsightDefinition insight)
{
    // Resolve the handler that knows how to run this insight's data source.
    var handler = _insightHandlerFactory.Get(insight.DataSource);

    // Run the insight's query; tenant scoping is enforced by the data-access layer.
    var result = await handler.ExecuteAsync(insight);

    // Persist the snapshot so the result feeds historical trend data.
    await _insightsMetricsRepository.Create(new InsightsMetricsEntity
    {
        SnapshotTime = DateTime.Now,
        InsightName = insight.Name,
        Count = result
    });
}
Each insight is executed in a tenant-isolated context, with timeout, logging, and retries built-in.
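The exact policies aren't shown in this post, but a minimal sketch of the idea - bounded retries plus a per-attempt timeout around the method above - might look like the following. The attempt count and timeout are example values, _logger is an assumed ILogger field on the worker, and Task.WaitAsync requires .NET 6+.
// Illustrative guard rails around the execution shown above; values are examples.
public async Task ExecuteWithGuardsAsync(string tenantId, InsightDefinition insight)
{
    const int maxAttempts = 3;

    for (var attempt = 1; attempt <= maxAttempts; attempt++)
    {
        // Give up on an attempt after two minutes.
        using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(2));
        try
        {
            await ExecuteInsightAsync(tenantId, insight).WaitAsync(cts.Token);
            return; // success - no further retries
        }
        catch (Exception ex) when (attempt < maxAttempts)
        {
            _logger.LogWarning(ex, "Insight {Insight} failed for {Tenant} (attempt {Attempt})",
                insight.Name, tenantId, attempt);
        }
    }
    // The final failure propagates, so the SQS message eventually lands in the DLQ.
}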
Real-Time Execution
When a user interacts with the UI, a real-time insight request is sent to the backend:
GET /api/insights/ai-tool-access-by-privileged-users?limit=20
We route the query based on the dataSource, scope the result set for performance, and return fresh results on demand.
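A stripped-down version of that endpoint, assuming ASP.NET Core and the same handler factory as the batch path, could look like the sketch below; IInsightCatalog, ExecuteDrilldownAsync, and the route shape are placeholders rather than our exact classes.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/insights")]
public class InsightsController : ControllerBase
{
    private readonly IInsightCatalog _catalog;               // definitions synced from S3
    private readonly IInsightHandlerFactory _handlerFactory; // routes by dataSource

    public InsightsController(IInsightCatalog catalog, IInsightHandlerFactory handlerFactory)
    {
        _catalog = catalog;
        _handlerFactory = handlerFactory;
    }

    [HttpGet("{insightId}")]
    public async Task<IActionResult> GetAsync(string insightId, [FromQuery] int limit = 20)
    {
        var insight = _catalog.Find(insightId);
        if (insight is null)
            return NotFound();

        // Route by dataSource and cap the result set so drill-downs stay cheap.
        var handler = _handlerFactory.Get(insight.DataSource);
        var rows = await handler.ExecuteDrilldownAsync(insight, limit);
        return Ok(rows);
    }
}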
Key Technical Decisions
Managing Database Load
To support scalable insight execution without compromising our core systems, we had to be intentional about where and how we ran analytical queries.
All analytical queries in our insights system are routed to read replicas of our production database. This architectural choice is critical for protecting the performance and stability of our transactional systems.
Here’s how it works:
- Separation of concerns: Write traffic (user interactions, configurations, etc.) is isolated from intensive read-only insight queries
- Dedicated compute for analytics: Read replicas are provisioned with compute optimized for long-running or complex analytical queries, including aggregations and joins
- Consistency trade-offs: We accept eventual consistency (a few seconds delay) as a trade-off in exchange for the ability to run heavier queries without interfering with real-time business logic
In addition to offloading reads to replicas, we apply safety mechanisms:
- All queries are tenant-scoped. This is enforced using a data-access layer mechanism that ensures each query’s scope is limited to a single tenant - preventing any human error in this critical flow
- We enforce row limits (LIMIT 100, etc.)
- Query timeouts and retries are in place
- For extremely heavy insights, we redirect to Snowflake, where we either query from pre-aggregated summary tables or use dynamic tables that automatically refresh and materialize insight-specific logic
This setup allows us to scale insight usage across thousands of tenants without degrading the performance of our core platform.
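To give a feel for those guardrails, here is a rough sketch assuming a PostgreSQL read replica (via Npgsql) with row-level security keyed on a session setting; the database, the RLS mechanism, and the names are assumptions for illustration, and our actual data-access layer differs in the details.
using System.Threading.Tasks;
using Npgsql;

public static class ScopedQueryRunner
{
    // Illustrative guardrails for analyst SQL on a read replica: a tenant-bound session,
    // a hard row cap, and a command timeout, regardless of what the inner query does.
    public static async Task<int> RunScopedAsync(NpgsqlDataSource readReplica, string analystSql, string tenantId)
    {
        await using var conn = await readReplica.OpenConnectionAsync();

        // Bind this session to a single tenant; row-level-security policies on the
        // underlying tables then filter everything the analyst's query can see.
        await using (var setTenant = new NpgsqlCommand(
            "SELECT set_config('app.tenant_id', @tenant, false)", conn))
        {
            setTenant.Parameters.AddWithValue("tenant", tenantId);
            await setTenant.ExecuteNonQueryAsync();
        }

        // Cap rows and bound execution time.
        var bounded = $"SELECT * FROM ({analystSql.TrimEnd().TrimEnd(';')}) AS q LIMIT 100";
        await using var cmd = new NpgsqlCommand(bounded, conn) { CommandTimeout = 30 };

        await using var reader = await cmd.ExecuteReaderAsync();
        var rows = 0;
        while (await reader.ReadAsync()) rows++; // e.g. feed the matched-count metric
        return rows;
    }
}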
Query Duration and Stability
- We enforce a max execution time per job
- Insights are chunked and processed incrementally
- Results are streamed into AWS Firehose for durability
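For the streaming step above, a bare-bones sketch using the AWS SDK for .NET might look like this; the delivery stream name and record shape are placeholders, and the downstream delivery to durable storage is handled by the Firehose stream itself.
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

public class InsightResultStreamer
{
    private readonly IAmazonKinesisFirehose _firehose;

    public InsightResultStreamer(IAmazonKinesisFirehose firehose) => _firehose = firehose;

    // Push one result record into Firehose; the delivery stream forwards it to the sink.
    public Task StreamResultAsync(string tenantId, string insightId, long matchedCount)
    {
        var payload = JsonSerializer.Serialize(new { tenantId, insightId, matchedCount });
        return _firehose.PutRecordAsync(new PutRecordRequest
        {
            DeliveryStreamName = "insight-results", // placeholder name
            Record = new Record { Data = new MemoryStream(Encoding.UTF8.GetBytes(payload)) }
        });
    }
}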
Supporting On-Demand Drilldowns
- Real-time drill-downs are lazily evaluated - detailed data is only queried when requested by users, minimizing unnecessary computation and system load
- Pagination, filtering, and sorting are supported via SQL wrappers
Global Multi-Region Support
- All configurations live in S3 and are available globally
- Each cloud region executes jobs for its assigned tenants
- Insight result tables are stored in Snowflake by region and partitioned by date
Gradual and Selective Rollout
- Insights are gated via customer-segment feature flags
- Flags allow A/B testing and partial/gradual rollout
- Disabled insights are skipped in batch runs and hidden in UI
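Here is a minimal sketch of how that gating might look on the batch side, assuming a generic feature-flag client; IFeatureFlagClient and IsEnabledAsync are stand-ins for whatever flag provider is actually in use, and the flag name follows the toggle-<insight-name>-insight convention described earlier.
// Illustrative gating of a batch run; the flag client is a stand-in.
public async Task<List<InsightDefinition>> FilterEnabledAsync(
    string tenantId, IEnumerable<InsightDefinition> insights, IFeatureFlagClient flags)
{
    var enabled = new List<InsightDefinition>();
    foreach (var insight in insights)
    {
        var flagName = $"toggle-{insight.Name.Replace('_', '-')}-insight";

        // Disabled insights never reach the SQS queue; the UI hides them the same way.
        if (await flags.IsEnabledAsync(flagName, tenantId))
            enabled.Add(insight);
    }
    return enabled;
}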
Trend Persistence and Retrospective Analysis
It was important to us that we be able to track the output of each insight execution over time - both for transparency and to enable retrospective analysis. So, every insight execution stores its output in Snowflake for longitudinal tracking:
CREATE TABLE insights_results (
    tenant        STRING,
    insight       STRING,
    snapshot_time TIMESTAMP_NTZ,
    matched_count INT
);
Based on this data, we provide:
- Time-series charts
- Change detection alerts
UI Rendering Logic
On the frontend, our insights UI dynamically renders based on the configuration provided by the backend. The entity type declared in the insight JSON config drives the icon, the drill-down target, and other presentation details.
Here’s a simplified example of how we use this in our React UI:
const ENTITY_CONFIG = {
  Device: {
    icon: <DeviceIcon />, // custom React component
    drillDownUrl: (id) => `/devices/${id}`
  },
  User: {
    icon: <UserIcon />,
    drillDownUrl: (id) => `/users/${id}`
  },
  Extension: {
    icon: <ExtensionIcon />,
    drillDownUrl: (id) => `/extensions/${id}`
  }
};

function InsightRow({ insight }) {
  const config = ENTITY_CONFIG[insight.entityType];
  return (
    <tr>
      <td>{config.icon}</td>
      <td>{insight.title}</td>
      <td>
        <a href={config.drillDownUrl(insight.entityId)}>
          View Details
        </a>
      </td>
    </tr>
  );
}
This design allows the UI to remain schema-driven, adapting automatically as new entity types and insights are introduced - no redeploy required.
Developer-Focused Benefits
This architecture wasn’t built just for analysts. It also significantly improves developer experience:
- No PRs or code reviews - just lightweight config reviews for analytical changes
- Clear separation of concern between product logic and analytics
- Easier debugging with tenant-specific trace IDs and DLQs (dead-letter queues)
With config-based insights, devs can focus on core systems, not wiring up dashboards.
Why This Changes the Game
- No code deployments: Insights are added with config files
- Fast time-to-value: Analysts can ship insights in minutes
- Real-time + batch: Hybrid execution path covers all needs
- Scalable by design: Event-driven, multi-tenant, and resilient
- Production-ready: Includes retries, metrics, and observability
- Autonomous teams: No need to involve developers for each insight
Final Thoughts
We turned observability into a product.
By rethinking the boundary between dev and analytics, we created a system that enables anyone to ship meaningful insights in minutes without risk or friction.
The payoff? Faster decisions. Happier developers. Better data for our users.
More importantly, we noticed a pattern that worked: static configuration as a control plane. Instead of embedding logic in code, we define behavior in a structured configuration that flows through validation, execution, and UI - without manual handoffs or deployment cycles.
This approach, which decouples logic from implementation, has already proven useful in other areas of our platform: custom dashboards, onboarding flows, and dynamic policy builders. The pattern scales across teams, use cases, and time.
We're continuing to improve the system - richer schema validation, deeper config linting, real-time alerts, and dynamic routing. But at its core, it's still structured configuration, SQL, and CI - simple tools that enable powerful outcomes. So if you're tired of the old way of delivering analytics, this is your invitation to rethink what’s possible.
Simple tools. Scalable system. Enduring architecture.