dc.contributor.author | Ye, Juan Juan. | en_US |
dc.date.accessioned | 2014-10-21T12:34:53Z | |
dc.date.available | 1990 | |
dc.date.issued | 1990 | en_US |
dc.identifier.other | AAINN64513 | en_US |
dc.identifier.uri | http://hdl.handle.net/10222/55204 | |
dc.description | This thesis describes a complete theory of optimal control of piecewise deterministic Markov processes under weak assumptions. The theory consists of a description of the processes, a nonsmooth stochastic maximum principle as a necessary optimality condition, a generalized Bellman-Hamilton-Jacobi necessary and sufficient optimality condition involving the Clarke generalized gradient, existence results and regularity properties of the value function. The impulse control problem is transformed to an equivalent optimal dynamic control problem. Cost functions are subject only to growth conditions. | en_US |
dc.description | Piecewise deterministic Markov processes, termed PDPs for short, are continuous-time homogeneous Markov processes consisting of a mixture of deterministic motion and random jumps. PDPs, with stochastic jump processes and deterministic dynamical systems as special cases, include virtually all of the stochastic models of applied probability except diffusions. Their impulse control extends their applicability to discrete event problems such as stochastic scheduling. The processes are controlled by an open-loop control, depending on the postjump state and the time elapsed since the last jump, in the interior of the state space; a feedback control on the boundary of the state space; and impulse controls on the entire state space. The expected value of a performance functional of integral type with additional boundary and impulse costs is to be minimized. | en_US |
dc.description | The PDP optimal control problem is converted to an infinite-horizon discrete-time stochastic optimal control problem, and it is shown that the optimal strategy for control of a PDP is to choose after each jump a control function which is an optimal control in a corresponding deterministic control problem where the state of the system is required to stop at the boundary. This deterministic control problem is, however, non-standard in that the terminal time is not fixed but instead is either infinity or the first time the trajectory reaches the boundary of the state space. As preliminary results, we obtain a nonsmooth maximum principle as a necessary optimality condition and a necessary and sufficient optimality condition in terms of a generalized Bellman-Hamilton-Jacobi equation involving the Clarke generalized gradient for the deterministic problem. The desired results then follow in a straightforward manner. | en_US |
dc.description | Thesis (Ph.D.)--Dalhousie University (Canada), 1990. | en_US |
dc.language | eng | en_US |
dc.publisher | Dalhousie University | en_US |
dc.subject | Operations Research. | en_US |
dc.title | Optimal control of piecewise deterministic Markov processes. | en_US |
dc.type | text | en_US |
dc.contributor.degree | Ph.D. | en_US |