Today, we’ll be talking about how units in Sea Power handle sensor detections and communicate that information to one another.
Sea Power takes place at the dawn of computerized combat information systems and digital datalinks. The Naval Tactical Data System (NTDS), Tactical Datalinks (TDL), New Threat Upgrade (NTU), and AEGIS in the US and the Soviet More (Море) family of systems were in the process of revolutionizing how battlespace information was managed. Where once a detection would need to be manually placed on a plotting table and laboriously updated, a computer could ingest, maintain, and communicate that information across the rest of the force.
Sea Power uses a simplified system based on our best understanding of how real sensor systems work to provide you with a view of the battlespace that is, we hope, remarkably true to the view a task force commander would have while orchestrating combat today.
In the real world, there are three major intermediate representations between the raw sensor output and the combat information display:
- Plots describe the presence or absence of a detectable object at a given position and time, according to a specific sensor, not what the object may be or any of its history. Sea Power does not simulate the plot stage of detection.
- Tracks describe the presence of a specific object at a given position and time as seen by one sensor; tracks are formed out of plots by aggregating “close” results that likely came from a single target through a process referred to as track extraction.
Tracks are the lowest level of sensor output in Sea Power. A Sea Power track comprises several optional sensed properties, including:
- Range and bearing from the sensor
- Measured velocity
- Environment (e.g. surface or air)
A track need only provide one field; all others may be missing. For example, a passive sonar can tell you a target’s environment, class, and bearing, but not its range or its true operator (when multiple countries operate the same class).
- Vehicles are built out of multiple tracks and denote the presence of a target as well as its history; a vehicle represents all information that is sensed about a target at a given time combined with the historical information that has been accumulated about the target since it was first detected.
Vehicles are constructed by combining tracks over time, taking the best-quality or most recent data. A vehicle in Sea Power consists of the current best estimate of the target’s position and velocity as well as all of the other track parameters.
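Concretely, the track/vehicle split can be sketched as two small records, one holding a single sensor’s (mostly optional) measurements and one holding the accumulated best estimate. This is an illustrative Python sketch with invented field names, not the game’s actual code:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Track:
    """One sensor's view of one target; every field is optional."""
    bearing_deg: Optional[float] = None             # bearing from the sensing unit
    range_m: Optional[float] = None                 # range, if the sensor measures it
    velocity: Optional[Tuple[float, float]] = None  # measured velocity vector
    environment: Optional[str] = None               # e.g. "surface" or "air"
    error: float = 1.0                              # relative error of this fix

@dataclass
class Vehicle:
    """Best current estimate plus accumulated history for one target."""
    position: Optional[Tuple[float, float]] = None
    velocity: Optional[Tuple[float, float]] = None
    environment: Optional[str] = None
    first_detected_s: Optional[float] = None        # when the target was first seen
```

A passive sonar contact, for instance, would populate only `bearing_deg` and `environment`, leaving `range_m` unset.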
Sea Power’s vehicle combination approach is straightforward. Working over each track provided to the system, we determine the world-space position of the target by combining the sensor’s own position with the bearing (and, where available, range) information provided by the track. This is then augmented with the other state information from the track, giving preference to whichever source has the lowest error. The process is repeated until all tracks have been ingested.
|          | Radar Track       | ESM Track                | Synthesized Vehicle | True Vehicle   |
|----------|-------------------|--------------------------|---------------------|----------------|
| Position | 10 km @ 120° ± 1% | 130° ± 10%               | 10 km @ 120° ± 1%   | 10.5 km @ 121° |
| Kind     | Aircraft          | Aircraft or Ship         | Aircraft            | Aircraft       |
| Type     | N/A               | SH-2F or Adams-class DDG | SH-2F               | SH-2F          |
In the above example we are synthesizing a vehicle representing an SH-2F Seasprite helicopter detected by both radar and Electronic Support Measures (ESM). The radar can determine that the target is an aircraft and provide a relatively precise position. Meanwhile, the ESM system is having problems: the target is emitting using an LN-66 radar, but this radar is used on both the SH-2F and the Adams-class DDG (among others). The ESM system also only resolves a low-quality bearing and no range information whatsoever. The vehicle combination system merges these results by selecting the radar’s positional fix, since it is the most accurate, while determining that the target is an SH-2F by picking the ESM-derived type that is consistent with the radar-derived kind (for a DDG is not going to be flying unless something has gone very wrong).
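A minimal sketch of how such a merge could work, in illustrative Python with an assumed kind-of-type lookup table (this is our reading of the behavior described above, not the game’s actual code):

```python
# Assumed lookup from type to kind, used to prune inconsistent candidates.
KIND_OF = {"SH-2F": "Aircraft", "Adams-class DDG": "Ship"}

def combine(tracks):
    """Merge per-sensor tracks into one vehicle: take the lowest-error
    positional fix, intersect the candidate kind/type sets, then drop
    any type whose kind contradicts the sensed kind."""
    position, pos_error = None, float("inf")
    kinds, types = set(KIND_OF.values()), set(KIND_OF)
    for t in tracks:
        if t.get("position") and t["pos_error"] < pos_error:
            position, pos_error = t["position"], t["pos_error"]
        if t.get("kinds"):
            kinds &= t["kinds"]
        if t.get("types"):
            types &= t["types"]
    types = {ty for ty in types if KIND_OF[ty] in kinds}
    return {"position": position, "kinds": kinds, "types": types}

radar = {"position": "10 km @ 120°", "pos_error": 0.01, "kinds": {"Aircraft"}}
esm   = {"position": None, "pos_error": 0.10,
         "kinds": {"Aircraft", "Ship"}, "types": {"SH-2F", "Adams-class DDG"}}
vehicle = combine([radar, esm])
# vehicle: position "10 km @ 120°", kinds {"Aircraft"}, types {"SH-2F"}
```

The radar wins the positional fix on error, while the Adams-class DDG candidate is eliminated because its kind (Ship) is inconsistent with the radar-sensed kind (Aircraft).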
Sea Power only performs this vehicle combination process locally; it does not attempt to merge tracks from sensors on different platforms to further refine results. For example, if two radar tracks of the same target are produced by two different units along different bearings, Sea Power will not use that information to further refine the vehicle position estimate. This lack of fusion matches the limitations of the era’s Combat Management Systems (CMS), though you may have heard that a number of nations are working on a capability to do just this. However, we need to show you (and the AI) a representation of “all that is knowable to you” somehow.
To give you a useful minimap, we provide a force-wide sensor picture. This overall picture is constructed by combining the vehicles produced by each unit into a “best estimate” for every target, simply picking the highest-accuracy solution for each vehicle property. This best estimate is the sensor picture shown to you on the minimap and used by the AI to make tactical decisions for all but submerged submarines. You’ll be pleased to know that this approach is based on what data is publicly available about the TDL systems that have been in use throughout the world since the dawn of the missile age.
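The force-wide picture is, in essence, a per-property argmin over error. An illustrative sketch (assumed data layout, not the actual CMS code):

```python
def force_picture(local_vehicles):
    """Build the force-wide picture for one target.

    local_vehicles: one dict per reporting unit, mapping
    property name -> (value, error). For each property, the
    lowest-error value reported by any unit wins."""
    best = {}
    for vehicle in local_vehicles:
        for prop, (value, error) in vehicle.items():
            if prop not in best or error < best[prop][1]:
                best[prop] = (value, error)
    return {prop: value for prop, (value, _) in best.items()}

frigate = {"position": ("via TMA", 0.20), "kind": ("Ship", 0.05)}
a7      = {"position": ("visual fix", 0.01)}
picture = force_picture([frigate, a7])
# → {'position': 'visual fix', 'kind': 'Ship'}
```

Note that the winner is chosen per property, not per unit: here the A-7 contributes the position while the frigate still contributes the kind.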
Here we see a simple (variable time compressed) example of vehicle synthesis in action. The detection sequence, against a hostile Kynda-class RKR, goes as follows:
- Initially, our positional fix is derived from target motion analysis based on passive sonar from two Oliver Hazard Perry-class frigates, one of which is on screen and the other of which is to the north of the shown area.
- The A-7 is then able to get a visual (non-identifying) contact on the ship, giving us its position far more accurately than TMA can, but not its identity.
- This is further refined by the aircraft once it gets close enough to identify the target.
- The plane then overflies the Kynda, losing sight of it…
- Which causes the visual positional fix to time out, causing the best positional solution to become the one from target motion analysis. The visual target ID is retained even once the positional source changes.
- In the interim, the TMA system has reinitialized its target fix based on the visual contact so its uncertainty area resumes where the visual fix left off.
- Then the A-7 turns around and re-acquires the visual track, locking down the target position…
- Which is then lost once it flies past the target, again reverting to the TMA solution.
This example shows how the plotting system will take the best available fix and synthesize that with past information to show you an accurate sensor picture. We aren’t doing sensor fusion, as mentioned, so we’re merely taking the best available fix rather than trying to bolt them on top of one another. However, there’s an exception to this rule: the aforementioned target motion analysis system.
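The fix-selection behavior in the sequence above can be sketched as a small function: a fix whose age exceeds its sensor’s timeout is dropped, and the lowest-error fix that remains wins. The timeout values below are invented for illustration, and this is our reading of the behavior, not the game’s code:

```python
# Assumed per-source staleness timeouts, in seconds (illustrative values).
FIX_TIMEOUT = {"visual": 10.0, "tma": 120.0}

def best_fix(fixes, now):
    """Pick the best live positional fix.

    fixes: list of dicts with 'source', 'time', 'error', 'position'.
    A fix older than its source's timeout is discarded; among the
    survivors, the lowest-error fix wins."""
    live = [f for f in fixes if now - f["time"] <= FIX_TIMEOUT[f["source"]]]
    return min(live, key=lambda f: f["error"], default=None)

fixes = [
    {"source": "visual", "time": 0.0,  "error": 0.01, "position": "overhead"},
    {"source": "tma",    "time": 25.0, "error": 0.15, "position": "ellipse"},
]
best_fix(fixes, 5.0)   # → the visual fix: still fresh, lowest error
best_fix(fixes, 30.0)  # → the TMA fix: the visual fix has timed out
```

Target identification would be tracked separately from the positional fix, which is why the visual ID survives the reversion to the TMA solution.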
Religiously adhering to our “no sensor fusion” rule would give us a serious inaccuracy. Era TDL systems allow anti-submarine warfare platforms to share bearing-only tracks to allow for multi-platform target motion analysis. In turn, by getting multiple different units to report bearings to the same target, TDL-equipped ASW platforms could triangulate a target’s position and thus estimate its position and velocity. We, too, allow this by implementing a model of TMA based on a Bayesian target state estimator, whose uncertainty region is shown as the ellipses in our plotting example. We’ll dive into more depth as to how this system works in a future update, so stay tuned!
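As a toy illustration of why sharing bearing-only tracks helps (this is plain two-ray triangulation, not the in-game Bayesian estimator): two bearings taken from separated platforms intersect at the target’s position, which no single bearing can provide.

```python
import math

def cross(a, b):
    """2D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def triangulate(p1, brg1, p2, brg2):
    """Intersect two bearing rays from two platforms.

    Positions are (x, y) in metres; bearings are in degrees
    clockwise from north. Solves p1 + t1*d1 = p2 + t2*d2."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    t1 = cross(dp, d2) / cross(d1, d2)  # distance along the first ray
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two frigates 10 km apart each hold a passive bearing on the same target.
x, y = triangulate((0.0, 0.0), 45.0, (10000.0, 0.0), 315.0)
# → roughly (5000, 5000): the rays cross at the target
```

A real estimator must also cope with bearing noise and near-parallel rays (where the intersection is ill-conditioned), which is where the Bayesian state estimator and its uncertainty ellipses come in.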