relife.renewal_process.RenewalRewardProcess

class relife.renewal_process.RenewalRewardProcess(model: relife.model.LifetimeModel, reward: relife.reward.Reward, model1: typing.Optional[relife.model.LifetimeModel] = None, reward1: typing.Optional[relife.reward.Reward] = None, discount: relife.discounting.Discount = <relife.discounting.ExponentialDiscounting object>)[source]

Bases: relife.renewal_process.RenewalProcess

Renewal reward process.

Creates a renewal reward process.

Parameters
  • model (LifetimeModel) – A lifetime model representing the durations between events.

  • reward (Reward) – A reward associated with the interarrival time.

  • model1 (LifetimeModel, optional) – A lifetime model for the first renewal (delayed renewal process), by default None.

  • reward1 (Reward, optional) – A reward associated with the first renewal, by default None.

  • discount (Discount, optional) – A discount function applied to the rewards, by default ExponentialDiscounting().
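
A minimal construction sketch follows. Only RenewalRewardProcess and the default ExponentialDiscounting are taken from the signature above; the Weibull import is an assumption about how relife exposes its parametric lifetime distributions, and my_reward is a pure placeholder for whatever Reward subclass fits the application.

# Construction sketch (hedged): `Weibull` is assumed to be importable from the
# package root; substitute any fitted LifetimeModel instance.
from relife import Weibull
from relife.renewal_process import RenewalRewardProcess

model = Weibull(2.0, 0.05)   # illustrative shape and rate parameters
my_reward = ...              # placeholder: any relife.reward.Reward instance
                             # (e.g. a replacement-cost reward); replace before running

# ExponentialDiscounting() is the default discount, as shown in the signature above.
rrp = RenewalRewardProcess(model, my_reward)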

Methods

asymptotic_expected_equivalent_annual_worth

Asymptotic expected equivalent annual worth.

asymptotic_expected_total_reward

Asymptotic expected total reward.

expected_equivalent_annual_worth

Expected equivalent annual worth.

expected_total_reward

The expected total reward.

renewal_density

The renewal density.

renewal_function

The renewal function.

sample

Renewal reward data sampling.

expected_total_reward(t: numpy.ndarray, model_args: Tuple[numpy.ndarray, ...] = (), reward_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = (), reward1_args: Tuple[numpy.ndarray, ...] = (), discount_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray[source]

The expected total reward.

Parameters
  • t (1D array) – Timeline.

  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • reward_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

  • reward1_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward of the first renewal, by default ().

  • discount_args (Tuple[ndarray,...], optional) – Extra arguments required by the discount function, by default ().

Returns

Expected total reward of the process evaluated at each point of the timeline.

Return type

ndarray

Raises

NotImplementedError – If the discount function is not exponential.

Notes

The renewal equation solved by the expected reward is:

\[z(t) = \int_0^t E[Y | X = x] D(x) \mathrm{d}F(x) + \int_0^t z(t-x) D(x)\mathrm{d}F(x)\]

where:

  • \(z\) is the expected total reward,

  • \(F\) is the cumulative distribution function of the underlying lifetime model,

  • \(X\) is the interarrival random variable,

  • \(Y\) is the associated reward,

  • \(D\) is the exponential discount factor.

If the renewal reward process is delayed, the expected total reward is modified as:

\[z_1(t) = \int_0^t E[Y_1 | X_1 = x] D(x) \mathrm{d}F_1(x) + \int_0^t z(t-x) D(x) \mathrm{d}F_1(x)\]

where:

  • \(z_1\) is the expected total reward with delay,

  • \(F_1\) is the cumulative distribution function of the lifetime model for the first renewal,

  • \(X_1\) is the interarrival random variable of the first renewal,

  • \(Y_1\) is the associated reward of the first renewal.
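
As an illustration, the call below evaluates \(z(t)\) on a coarse timeline, reusing the rrp object from the construction sketch above. Passing the exponential discount rate through discount_args is an assumption consistent with the no-argument ExponentialDiscounting() default in the class signature.

import numpy as np

t = np.linspace(0, 50, 101)   # timeline

# Assumption: for the default ExponentialDiscounting, the discount rate is
# supplied through `discount_args`.
z = rrp.expected_total_reward(t, discount_args=(0.04,))
print(z.shape)   # one value per point of the timeline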

expected_equivalent_annual_worth(t: numpy.ndarray, model_args: Tuple[numpy.ndarray, ...] = (), reward_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = (), reward1_args: Tuple[numpy.ndarray, ...] = (), discount_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray[source]

Expected equivalent annual worth.

Gives the equivalent annual worth of the expected total reward of the process at each point of the timeline.

Parameters
  • t (1D array) – Timeline.

  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • reward_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

  • reward1_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward of the first renewal, by default ().

  • discount_args (Tuple[ndarray,...], optional) – Extra arguments required by the discount function, by default ().

Returns

The expected equivalent annual worth evaluated at each point of the timeline.

Return type

ndarray

Notes

The equivalent annual worth at time \(t\) is the expected total reward \(z(t)\) divided by the annuity factor \(AF(t)\).
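
For the default exponential discounting with rate \(\rho\), the annuity factor takes the standard closed form below (stated here as background, not taken from the docstring):

\[AF(t) = \int_0^t D(x)\,\mathrm{d}x = \frac{1 - e^{-\rho t}}{\rho}, \qquad \text{so that} \qquad \mathrm{EEAW}(t) = \frac{z(t)}{AF(t)}.\]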

asymptotic_expected_total_reward(model_args: Tuple[numpy.ndarray, ...] = (), reward_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = (), reward1_args: Tuple[numpy.ndarray, ...] = (), discount_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray[source]

Asymptotic expected total reward.

Parameters
  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • reward_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

  • reward1_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward of the first renewal, by default ().

  • discount_args (Tuple[ndarray,...], optional) – Extra arguments required by the discount function, by default ().

Returns

The asymptotic expected total reward of the process.

Return type

ndarray

Raises

NotImplementedError – If the discount function is not exponential.

Notes

The asymptotic expected total reward is:

\[z^\infty = \lim_{t\to \infty} z(t) = \dfrac{E[Y D(X)]}{1-E[D(X)]}\]

where:

  • \(X\) is the interarrival random variable,

  • \(Y\) is the associated reward,

  • \(D\) is the exponential discount factor.

If the renewal reward process is delayed, the asymptotic expected total reward is modified as:

\[z_1^\infty = E[Y_1 D(X_1)] + z^\infty E[D(X_1)]\]

where:

  • \(X_1\) is the interarrival random variable of the first renewal,

  • \(Y_1\) is the associated reward of the first renewal.
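
The sketch below compares the asymptotic value with a long-horizon evaluation, reusing rrp from the construction sketch and the same assumed discount_args convention for the discount rate.

import numpy as np

z_inf = rrp.asymptotic_expected_total_reward(discount_args=(0.04,))

# For a long horizon, z(t) should be close to the asymptotic value.
z_long = rrp.expected_total_reward(np.array([500.0]), discount_args=(0.04,))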

asymptotic_expected_equivalent_annual_worth(model_args: Tuple[numpy.ndarray, ...] = (), reward_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = (), reward1_args: Tuple[numpy.ndarray, ...] = (), discount_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray[source]

Asymptotic expected equivalent annual worth.

Parameters
  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • reward_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

  • reward1_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward of the first renewal, by default ().

  • discount_args (Tuple[ndarray,...], optional) – Extra arguments required by the discount function, by default ().

Returns

The asymptotic expected equivalent annual worth of the process.

Return type

ndarray

Raises

NotImplementedError – If the discount function is not exponential.

sample(T: float, model_args: Tuple[numpy.ndarray, ...] = (), reward_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = (), reward1_args: Tuple[numpy.ndarray, ...] = (), discount_args: Tuple[numpy.ndarray, ...] = (), n_samples: int = 1, random_state: Optional[int] = None) relife.data.RenewalRewardData[source]

Renewal reward data sampling.

Parameters
  • T (float) – End time of the observation period.

  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • reward_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

  • reward1_args (Tuple[ndarray,...], optional) – Extra arguments required by the associated reward of the first renewal, by default ().

  • discount_args (Tuple[ndarray,...], optional) – Extra arguments required by the discount function, by default ().

  • n_samples (int, optional) – Number of samples, by default 1.

  • random_state (int, optional) – Random seed, by default None.

Returns

Samples of replacement times, durations and rewards.

Return type

RenewalRewardData
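
A sampling sketch, again reusing rrp from the construction example; the discount-rate convention remains an assumption:

data = rrp.sample(
    T=100.0,
    discount_args=(0.04,),   # same assumed discount-rate convention as above
    n_samples=10,
    random_state=42,
)
# `data` is a RenewalRewardData object holding replacement times, durations and rewards.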

renewal_density(t: numpy.ndarray, model_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray

The renewal density.

Parameters
  • t (1D array) – Timeline.

  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

Returns

Renewal density evaluated at each point of the timeline.

Return type

ndarray

Raises

NotImplementedError – If the lifetime model is not absolutely continuous.

Notes

The renewal density is the derivative of the renewal function with respect to time. It is computed by solving the renewal equation:

\[\mu(t) = f_1(t) + \int_0^t \mu(t-x) \mathrm{d}F(x)\]

where:

  • \(\mu\) is the renewal density,

  • \(F\) is the cumulative distribution function of the underlying lifetime model,

  • \(f_1\) is the probability density function of the underlying lifetime model for the first renewal in the case of a delayed renewal process.

References

[1] Rausand, M., Barros, A., & Hoyland, A. (2020). System Reliability Theory: Models, Statistical Methods, and Applications. John Wiley & Sons.

renewal_function(t: numpy.ndarray, model_args: Tuple[numpy.ndarray, ...] = (), model1_args: Tuple[numpy.ndarray, ...] = ()) numpy.ndarray

The renewal function.

Parameters
  • t (1D array) – Timeline.

  • model_args (Tuple[ndarray,...], optional) – Extra arguments required by the underlying lifetime model, by default ().

  • model1_args (Tuple[ndarray,...], optional) – Extra arguments required by the lifetime model of the first renewal, by default ().

Returns

The renewal function evaluated at each point of the timeline.

Return type

ndarray

Notes

The expected total number of renewals is computed by solving the renewal equation:

\[m(t) = F_1(t) + \int_0^t m(t-x) \mathrm{d}F(x)\]

where:

  • \(m\) is the renewal function,

  • \(F\) is the cumulative distribution function of the underlying lifetime model,

  • \(F_1\) is the cumulative distribution function of the underlying lifetime model for the first renewal in the case of a delayed renewal process.

References

[1] Rausand, M., Barros, A., & Hoyland, A. (2020). System Reliability Theory: Models, Statistical Methods, and Applications. John Wiley & Sons.
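
Both renewal_function and renewal_density only need the timeline (plus any extra model arguments), so a usage sketch reusing rrp from the construction example is short:

import numpy as np

t = np.linspace(0, 50, 101)
m = rrp.renewal_function(t)    # expected number of renewals at each point of the timeline
mu = rrp.renewal_density(t)    # its time derivative; requires an absolutely continuous model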