I'm getting a memory leak and I believe it to be linked to the following warning:

> WARNING:tensorflow:6 out of the last 6 calls to triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

The warning occurs in the following function:

```python
def learn(self):
    state_arr, additional_info, action_arr, old_prob_arr, values, reward_arr, _, trades_complete, env_states, batches = _batches()  # generate batches
    reward_diff = reward_arr + values * (1 - tf.cast(trades_complete, dtype=tf.float32)) - values
    advantage = tf.concat(, axis=0)
    with tf.GradientTape(persistent=True) as tape:
        new_probs, new_val = self.cnn_actor_critic()
        masked_new_probs = ENVIRONMENT.mass_apply_mask(new_probs.numpy(), env_states)
        rows = tf.range(tf.shape(masked_new_probs))
```
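For context, cause (3) in the warning (passing Python objects instead of tensors) can be reproduced with a minimal sketch. The function and variable names below are illustrative, not from the post; only `tf.function` and `tf.constant` are real TensorFlow APIs:

```python
import tensorflow as tf

trace_count = 0

@tf.function
def square(x):
    global trace_count
    trace_count += 1  # Python side effect: runs only when the function is (re)traced
    return x * x

# Tensors with the same dtype and shape share one trace:
square(tf.constant(2))   # traces (1st time)
square(tf.constant(3))   # reuses the cached graph

# Python ints are treated as constants, so each distinct value retraces:
square(4)                # retraces
square(5)                # retraces again

print(trace_count)       # → 3
```

Since TensorFlow 2.9, wrapping with `tf.function(fn, reduce_retracing=True)` relaxes the traced input signatures to reduce retraces caused by varying shapes, which is the option the warning's point (2) refers to.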