## Abstract

This paper takes advantage of recent results on the probabilistic modeling of the ocean acoustic detection process to develop two approximate procedures for tackling a simplified version of the m-sensor, n-target resource allocation problem. The first procedure is termed "myopic" (short-sighted) and applies when the goal is to maximize the expected number of targets held in the short run. The second procedure is termed "presbyopic" (far-sighted) and applies when the goal is to maximize the expected number of targets held in the long run, that is, when the system is in steady state. Both approaches are suboptimal because they neglect, each in a different way, the interdependence of allocation decisions through time.

We formulate the myopic case as a Linear Programming "Assignment" optimization problem whose inputs are dynamically updated through time. Then we do the same for the presbyopic case. It is seen that if a presbyopic policy is followed, no switching decisions will ever occur. We then extend the myopic formulation to incorporate a general target-holding reward function as well as switching costs. We present some illustrative examples, applying both approaches to simulated data. We also present a comparison of both methods with an exact Stochastic Dynamic Programming approach we had developed earlier for problems of very small size.
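The myopic formulation described above amounts to solving an assignment problem each period with dynamically updated inputs. The sketch below (not from the paper; the matrix `p` and its values are hypothetical) illustrates this single-period step using SciPy's Hungarian-algorithm solver: given per-(sensor, target) holding probabilities, it finds the one-to-one assignment maximizing the expected number of targets held.

```python
# Hypothetical illustration of the single-period "Assignment" step:
# p[i, j] is an assumed probability that sensor i holds target j if
# assigned to it; the myopic policy re-solves this each period as the
# probabilities are updated.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
m_sensors, n_targets = 3, 4

# Assumed current holding probabilities (inputs updated through time)
p = rng.uniform(0.1, 0.9, size=(m_sensors, n_targets))

# linear_sum_assignment minimizes cost, so negate to maximize the
# total expected number of targets held
rows, cols = linear_sum_assignment(-p)
expected_held = p[rows, cols].sum()
assignment = dict(zip(rows.tolist(), cols.tolist()))
print(assignment, expected_held)
```

Extending this sketch with switching costs, as the paper's generalized myopic formulation does, would mean adjusting each entry of `p` by a penalty whenever sensor i's candidate target differs from its current one.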


Original language | English
---|---
Publication date | 1982
Number of pages | 8
Publication status | Published - 1982
Externally published | Yes
Event | 5th MIT/ONR Symposium on Command and Control - Monterey, United States. Duration: 23 Aug 1982 → 27 Aug 1982

### Conference

Conference | 5th MIT/ONR Symposium on Command and Control
---|---
Country/Territory | United States
City | Monterey
Period | 23/08/1982 → 27/08/1982