Case Study

Signal | Human Risk Intelligence

Designed a security incident investigation screen that puts the human behind the alert at the center, giving analysts the context they need to make better decisions, faster.

Role: Product Design · Self-initiated

Deliverables: Single-screen deep dive · Interactive prototype · Case study

Focus: B2B Enterprise UX · Dense data interfaces · Human-centered security

Process: Used Claude artifacts for rapid HTML prototyping of dense data layouts, testing information hierarchy in working code before high-fidelity execution in Figma. Tools generate options; judgment selects, refines, and ships.

Security teams don't lack data. They lack the story behind it.

When a data exposure incident is detected, existing tools show a file name, a severity level, and a policy violation. What they don't show is who the person is, whether the action was a mistake or intentional, and what the right response actually looks like. Analysts are expected to make consequential decisions about real people with almost no human context. Context drawn from public research on alert fatigue and SOC analyst workflows (CISA, Gartner).

Core Insight

The gap isn't in detection. It's in interpretation. An alert that says "HIGH RISK" without explaining why, or who, forces analysts to guess. Signal replaces the guess with a structured human profile, a behavioral timeline, and an intent model that makes the story legible before any action is taken.

Design Decisions & Trade-offs

Decision

Spectrum, not binary

The intent gauge shows a confidence percentage rather than a binary label. Reality is a spectrum. A 30% deliberate score requires a different response than 80%. Showing the number forces precision.

Trade-off

Transparent signals vs. cognitive load

Showing the individual signals that built the score adds complexity. But hiding them would make the system feel like a black box. Analysts need to be able to disagree with the model, so the design exposes the reasoning.

Trade-off

Action weight and consequence

Notify manager and Escalate to Insider Threat are both recommended, but one is reversible and low-stakes while the other is serious and permanent. The design had to communicate that difference before the analyst clicks. Solution: irreversible actions like 'Escalate to Insider Threat' are excluded from 'Run all recommended', marked with a distinct red treatment, and require explicit two-step confirmation.
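The rule above can be sketched in code. This is a minimal illustration, not the shipped implementation: the action names mirror the case study, but the fields and helper functions are assumptions made for the example.

```typescript
// Hypothetical sketch of the batch-run rule: irreversible actions are
// excluded from "Run all recommended" and need a two-step confirmation.
type Action = {
  label: string;
  reversible: boolean;
  recommended: boolean;
};

// "Run all recommended" only ever receives the reversible subset.
function batchRunnable(actions: Action[]): Action[] {
  return actions.filter((a) => a.recommended && a.reversible);
}

// Irreversible actions require the analyst to both initiate and
// explicitly confirm before anything executes.
function canExecute(action: Action, initiated: boolean, confirmed: boolean): boolean {
  if (action.reversible) return true;
  return initiated && confirmed;
}

const actions: Action[] = [
  { label: "Notify manager", reversible: true, recommended: true },
  { label: "Escalate to Insider Threat", reversible: false, recommended: true },
];
```

The point of encoding reversibility as data rather than styling is that the batch-run exclusion and the red treatment can never drift apart: both derive from the same field.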

Why Real-Code Prototyping

Dense data interfaces are hard to evaluate in static mockups. Real numbers, real proportions, real overflow behavior reveal problems that Figma frames hide. I prototyped the dashboard in HTML/React with Claude artifacts to test the actual information density before committing to a layout. The intent gauge, the weighted signals list, and the timeline all went through 3-4 functional iterations in code before any high-fidelity work in Figma. AI accelerated the cycle; the design choices remained mine.

The Screen

One incident. One analyst. Every relevant signal, in a single view.

What the screen is actually doing

Every column in the layout carries a distinct job. The left column answers who. The center column answers what happened. The right column answers how certain we are and what to do next. Together they replace a single alert with a complete picture.

The three columns

Who: Human context

The left column establishes who the person is before showing what they did. Tenure, clearance, manager, and device make every other signal meaningful.

Identity + Attribute Grid:

Six attributes that frame how to read everything else. Clearance and tenure aren't description, they're calibration.

Risk Score + Tags:

The number means nothing without the baseline. +41 above peer average is what makes 74 alarming. The watchlist tag is filled with amber - the only manually assigned value on the screen, and the only one that implies a prior human judgment.

Share Activity Chart:

The peak at 14:30 aligns exactly with the incident timestamp. The visual makes the connection before the analyst has to look for it.

Exfiltration Vectors Grid:

Four channels, four independent checks. USB at 0B doesn't mean safe - it means that route was blocked.
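The "0B doesn't mean safe" distinction is a data-model decision, not just a copy choice. A small sketch of the idea, with illustrative field names and channel labels that are assumptions, not the product's real schema:

```typescript
// Illustrative only: a vector at 0 B reads "safe" unless the status
// explains *why* it is zero. The label is derived from both fields.
type VectorStatus = "active" | "blocked" | "idle";

type ExfilVector = {
  channel: string;   // e.g. "USB", "Email", "Cloud share", "Print"
  bytes: number;
  status: VectorStatus;
};

// The label the grid shows next to each channel.
function vectorLabel(v: ExfilVector): string {
  if (v.status === "blocked") return `${v.channel}: 0 B (route blocked)`;
  if (v.bytes === 0) return `${v.channel}: 0 B (no activity)`;
  return `${v.channel}: ${v.bytes} B transferred`;
}
```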

Related Incidents:

Three incidents over 90 days turn this from a suspicion into a documented pattern.

What happened: Incident timeline

This section shows how Signal compresses a messy cross-system event stream into three layers: incident summary, temporal reconstruction, and evidence context. Each layer answers a different analyst question.

Summary Layer:

Computed risk fields sit above the timeline so the analyst starts with severity, scope, recipient risk, and containment before reading the event stream.

Timeline Layer:

Events from multiple systems are normalized into one sequence, with color used as a fast severity cue so escalation points are visible even before reading every row.
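The normalization step can be sketched as follows. The schema, source names, and color tokens here are assumptions for illustration; the real systems feeding the timeline are not specified in the case study.

```typescript
// Sketch: events from several source systems are mapped onto one
// schema and sorted into a single chronological sequence, with a
// severity field driving the color cue.
type Severity = "info" | "warning" | "critical";

type TimelineEvent = {
  at: Date;
  source: string;    // originating system, e.g. "DLP", "EDR"
  summary: string;
  severity: Severity;
};

// Merge per-system streams into one chronological sequence.
function normalize(streams: TimelineEvent[][]): TimelineEvent[] {
  return streams.flat().sort((a, b) => a.at.getTime() - b.at.getTime());
}

// Severity maps to a color token so escalation points stand out
// before the analyst reads every row.
const severityColor: Record<Severity, string> = {
  info: "gray",
  warning: "amber",
  critical: "red",
};
```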

Evidence Layer:

The file becomes the anchor object. Metadata, classification, ownership, and confidence explain why this specific action triggered the incident.

What to do: Response

The right column produces a score the analyst can interrogate. Every contributing signal is listed with its weight — so the model can be overridden, not just accepted.

Intent Score:

The model output is exposed as an interpretable score, not a black box. The gauge gives the analyst a fast read on confidence and direction: accidental versus deliberate.

Weighted Signals:

Every contributing signal is shown with its positive or negative weight. This lets the analyst understand why the score moved, challenge weak evidence, and override the model if needed.
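The structure behind that list can be sketched in a few lines. The signal names, weights, and the logistic squash below are all illustrative assumptions, not the real scoring model; the sketch only shows why keeping the per-signal weights visible lets an analyst challenge any single contribution.

```typescript
// Hedged sketch: each signal carries a signed weight, the sum is
// squashed into a 0-100 "deliberate" confidence, and the per-signal
// list stays inspectable alongside the final number.
type Signal = {
  name: string;
  weight: number; // positive -> deliberate, negative -> accidental
};

// Logistic squash keeps the gauge in (0, 100) regardless of scale;
// zero total evidence sits at the 50% midpoint.
function intentScore(signals: Signal[]): number {
  const total = signals.reduce((sum, s) => sum + s.weight, 0);
  return Math.round(100 / (1 + Math.exp(-total)));
}

const signals: Signal[] = [
  { name: "Off-hours access", weight: 1.2 },
  { name: "Personal email recipient", weight: 0.8 },
  { name: "First-time destination", weight: 0.5 },
  { name: "Long tenure, clean history", weight: -1.0 },
];
```

Because the score is a plain function of the visible weights, "override the model" reduces to disputing a specific row rather than arguing with an opaque number.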

Recommended Actions:

Response options are ranked from the same evidence model. The system turns the investigation context into concrete next steps while keeping the analyst in control of what actually runs.

What This Proves

Signal demonstrates that enterprise security tools can communicate in human terms without sacrificing technical depth. Designing for data-dense B2B environments doesn't mean accepting visual chaos. It means building structure that holds under pressure. The intent model, the behavioral timeline, and the action hierarchy all speak the language of the domain: they show that good design isn't decoration. It's operational clarity.

If the work made sense to you,

let's talk.

© 2026 Guy Bar-Sinai