7 Metrics Every Port Security Manager Should Track Daily

Port security metrics determine whether your terminal's security posture is genuinely effective or merely theatrical. Yet most port facility security officers (PFSOs) operate without a standardized dashboard of daily indicators. They rely on incident reports (lagging indicators), periodic audits (snapshots), and subjective assessments from patrol teams. In an era where the IMO's ISPS Code demands demonstrable, continuous security performance, this approach is insufficient. Here are the seven port security metrics that every security manager should be tracking daily — and what each one reveals about your terminal's actual security posture.

1. How Many Camera Feeds Are Operational Right Now?

Metric: Camera uptime percentage (target: above 98%)

This is the most fundamental metric and the most frequently neglected. A 500-camera system with 12% of units degraded or offline — a figure consistent with IAPH's 2025 survey of member ports — has 60 blind spots at any given moment. That is not a minor maintenance issue. It is a systemic coverage failure.

Track the percentage of cameras delivering usable footage in real time. "Usable" means the feed is live, the image quality meets minimum detection requirements, and the camera is pointing where it should be. Automated camera health monitoring systems can detect failures within seconds, compared to the weeks or months that manual checks typically require.
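As a sketch of the calculation, assuming a health-monitoring feed that reports liveness, image quality, and aim for each camera (the field names here are illustrative, not a standard API), the uptime percentage reduces to a simple ratio:

```python
from dataclasses import dataclass

@dataclass
class CameraStatus:
    camera_id: str
    live: bool              # feed is streaming
    quality_ok: bool        # image meets minimum detection requirements
    aimed_correctly: bool   # camera covers its assigned field of view

def camera_uptime_pct(statuses):
    """Percentage of cameras delivering *usable* footage right now."""
    usable = sum(s.live and s.quality_ok and s.aimed_correctly
                 for s in statuses)
    return 100.0 * usable / len(statuses)

fleet = [
    CameraStatus("CAM-001", True, True, True),
    CameraStatus("CAM-002", True, False, True),    # degraded image
    CameraStatus("CAM-003", False, False, False),  # offline
    CameraStatus("CAM-004", True, True, True),
]
print(f"Uptime: {camera_uptime_pct(fleet):.1f}%")  # Uptime: 50.0%
```

Note that all three conditions must hold for a camera to count as usable, matching the definition above.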

2. What Is Your Mean Time to Detect (MTTD)?

Metric: Average time between event occurrence and initial detection (target: under 30 seconds)

MTTD measures how quickly your security operation identifies a potentially significant event — an unauthorized zone entry, a perimeter breach, an unattended object. Traditional CCTV operations with human-only monitoring typically report an MTTD of 5–15 minutes, and that is only for events that are detected at all. AI-augmented systems reduce MTTD to under 10 seconds for events within camera coverage.

This metric directly reflects your ability to respond before an incident escalates. The UK Centre for the Protection of National Infrastructure (CPNI) recommends that critical infrastructure facilities target an MTTD of under 60 seconds for perimeter intrusions.
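Computing MTTD is straightforward once event occurrence and initial detection are timestamped from a synchronized clock. A minimal sketch, assuming the data arrives as pairs of datetimes:

```python
from datetime import datetime
from statistics import mean

def mttd_seconds(events):
    """Mean seconds between event occurrence and initial detection.

    events: list of (occurred_at, detected_at) datetime pairs drawn
    from a clock-synchronized source (illustrative data model).
    """
    return mean((detected - occurred).total_seconds()
                for occurred, detected in events)

events = [
    (datetime(2025, 1, 6, 8, 0, 0),   datetime(2025, 1, 6, 8, 0, 8)),
    (datetime(2025, 1, 6, 14, 30, 0), datetime(2025, 1, 6, 14, 30, 12)),
]
print(f"MTTD: {mttd_seconds(events):.1f} s")  # MTTD: 10.0 s
```

The same calculation works for MTTR by substituting detection and response-initiation timestamps.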

3. What Is Your Mean Time to Respond (MTTR)?

Metric: Average time between detection and response initiation (target: under 3 minutes)

MTTR captures the interval between an alert being generated and a response action being initiated — dispatching a patrol, locking down a zone, or escalating to a supervisor. While MTTD measures system performance, MTTR measures operational readiness.

Track MTTR by zone, shift, and alert type. Patterns emerge quickly: perhaps night-shift MTTR is three times longer than day-shift, or certain zones consistently show delayed responses because of patrol routing inefficiencies. ISPS auditors increasingly request evidence of response time performance as part of facility security assessments.
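Breaking MTTR down by zone, shift, or alert type is a grouping exercise. A sketch, assuming each alert record carries its dimensions and detection/response times in seconds (the record shape is illustrative):

```python
from collections import defaultdict
from statistics import mean

def mttr_by_dimension(alerts, key):
    """Mean response interval (seconds) grouped by zone, shift, or type."""
    buckets = defaultdict(list)
    for alert in alerts:
        buckets[alert[key]].append(alert["response_s"] - alert["detected_s"])
    return {group: mean(intervals) for group, intervals in buckets.items()}

alerts = [
    {"zone": "Berth 3", "shift": "day",   "detected_s": 0, "response_s": 90},
    {"zone": "Berth 3", "shift": "night", "detected_s": 0, "response_s": 300},
    {"zone": "Gate A",  "shift": "night", "detected_s": 0, "response_s": 240},
]
print(mttr_by_dimension(alerts, "shift"))  # {'day': 90, 'night': 270}
```

Passing `"zone"` or an alert-type field as the key gives the other breakdowns from the same records.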

4. What Is Your Alert-to-False-Positive Ratio?

Metric: Percentage of generated alerts that are confirmed as non-events (target: below 10%)

High false positive rates are the silent killer of security effectiveness. When 40–60% of alerts are false alarms — common in legacy motion-detection systems — operators develop "alert fatigue" and begin ignoring or slow-rolling responses. ASIS International research indicates that security teams experiencing chronic false alarm rates above 30% show measurable degradation in response to genuine threats.

Track this ratio daily. If it spikes, investigate whether environmental conditions (weather, lighting changes), maintenance issues, or system configuration problems are the cause. Modern AI detection systems maintain false positive rates below 5% through multi-factor validation.
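Assuming each alert carries an operator disposition after review (a `confirmed` flag is an illustrative field, not a product API), the daily rate and a spike check can be sketched as:

```python
def false_positive_rate(alerts):
    """Percentage of generated alerts confirmed as non-events."""
    non_events = sum(1 for a in alerts if not a["confirmed"])
    return 100.0 * non_events / len(alerts)

todays_alerts = [{"confirmed": True}] * 8 + [{"confirmed": False}] * 2
rate = false_positive_rate(todays_alerts)
print(f"False positive rate: {rate:.1f}%")  # False positive rate: 20.0%
if rate > 10.0:  # the target stated for this metric
    print("Above target -- check weather, lighting, and configuration")
```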

5. How Many Access Control Exceptions Occurred?

Metric: Daily count of access anomalies — tailgating, credential failures, unauthorized zone entries (target: declining trend)

Access control exceptions include failed badge attempts, door-held-open events, tailgating detections, and unauthorized zone entries identified by AI enforcement systems. Each exception represents either a security event or a process failure — both warrant attention.

Track exception counts by access point, time of day, and personnel category. Persistent exceptions at specific locations indicate infrastructure problems (broken door closers, poorly positioned readers). Exceptions concentrated around shift changes suggest process gaps. Any upward trend demands investigation.
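The breakdown described above is a counting problem over exception records. A sketch with assumed field names:

```python
from collections import Counter

def exception_hotspots(exceptions, dimension, top=3):
    """Top access points, hours, or categories by exception count."""
    return Counter(e[dimension] for e in exceptions).most_common(top)

exceptions = [
    {"point": "Gate A", "hour": 6,  "category": "tailgating"},
    {"point": "Gate A", "hour": 6,  "category": "badge_fail"},
    {"point": "Dock 2", "hour": 14, "category": "zone_entry"},
]
print(exception_hotspots(exceptions, "point", top=1))  # [('Gate A', 2)]
print(exception_hotspots(exceptions, "hour", top=1))   # [(6, 2)]
```

A cluster at one access point points to infrastructure; a cluster at shift-change hours points to process, as noted above.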

6. What Is Your ISPS Drill Compliance Rate?

Metric: Percentage of scheduled security drills and exercises completed on time (target: 100%)

The ISPS Code requires regular security drills and exercises at intervals specified in the facility security plan. Missing or delaying these drills is one of the most commonly cited deficiencies in port state control inspections. According to US Coast Guard annual inspection statistics, drill documentation deficiencies appear in approximately 15–20% of ISPS facility examinations.

Track not just whether drills occurred, but their documented outcomes — lessons learned, response times measured during drills, and corrective actions identified. This metric is both an operational readiness indicator and a direct compliance requirement.

7. What Is Your Incident Documentation Completeness Score?

Metric: Percentage of incidents with complete documentation — footage, timestamps, decisions, outcomes (target: above 95%)

When an incident occurs, your documentation quality determines your legal position, your insurance claim viability, and your regulatory compliance status. Incomplete incident records — missing footage, unsynchronized timestamps, undocumented decisions — create liability exposure.

Score each incident on documentation completeness: was the triggering event captured on video? Are timestamps synchronized across all systems? Is the decision chain documented (who was alerted, what action was taken, what was the outcome)? Systems built on decision engine architecture generate this documentation automatically. Legacy systems require manual compilation that is frequently incomplete.
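One way to operationalize the score is a checklist of required elements per incident. The field names below are illustrative, not a regulatory standard:

```python
REQUIRED = ("video_captured", "timestamps_synced",
            "decision_chain_recorded", "outcome_recorded")

def completeness_pct(incidents):
    """Percentage of incidents with every required element documented."""
    complete = sum(all(incident.get(field) for field in REQUIRED)
                   for incident in incidents)
    return 100.0 * complete / len(incidents)

incidents = [
    {"video_captured": True, "timestamps_synced": True,
     "decision_chain_recorded": True, "outcome_recorded": True},
    {"video_captured": True, "timestamps_synced": False,  # clocks drifted
     "decision_chain_recorded": True, "outcome_recorded": True},
]
print(f"Completeness: {completeness_pct(incidents):.0f}%")  # Completeness: 50%
```

Scoring all-or-nothing per incident is deliberately strict: a record missing any one element may not survive legal or regulatory scrutiny.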

How Should These Metrics Be Reported?

Build a daily security dashboard that displays these seven metrics with trend lines over 7-, 30-, and 90-day windows. The dashboard should be visible to the PFSO, terminal operations management, and — in aggregate form — to the company security officer (CSO) responsible for overall maritime security governance.
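The trend windows can be as simple as a tail mean over a list of daily readings. A sketch; a production dashboard would pull from a metrics store rather than an in-memory list:

```python
def trend(daily_values, window):
    """Mean of a daily metric over the most recent `window` readings.

    Shorter histories fall back to all available data.
    """
    recent = daily_values[-window:]
    return sum(recent) / len(recent)

daily_uptime = [97.5, 98.1, 98.4, 97.9, 98.6, 98.2, 98.8]  # last 7 days
print({w: round(trend(daily_uptime, w), 2) for w in (7, 30, 90)})
```

Comparing the 7-day mean against the 30- and 90-day means is what surfaces a degrading metric before it breaches its target.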

BIMCO recommends that port facility security metrics be reviewed in monthly security committee meetings, with daily tracking used for operational management and anomaly detection. ISO 28000 (security management systems for the supply chain) similarly requires documented measurement of security performance.

Key Takeaway

These seven port security metrics transform security management from a subjective, reactive discipline into a data-driven, proactive operation. They provide early warning of degrading performance, evidence for compliance audits, and a shared language for communicating security posture to stakeholders. If you cannot answer all seven questions about your terminal right now, that gap itself is the most important finding of the day.