
ValueError: too many values to unpack (expected 3) #14

Open
joeljosephjin opened this issue Apr 30, 2020 · 8 comments

@joeljosephjin commented Apr 30, 2020

The command

python eval.py --agent-config baselines/config/random-agent.yaml --episode-config config/check-ground-truth.yaml

gives the following error:

Set current directory to /home/elian/goseek-challenge
Found path: /home/elian/goseek-challenge/simulator/goseek-v0.1.4.x86_64
Mono path[0] = '/home/elian/goseek-challenge/simulator/goseek-v0.1.4_Data/Managed'
Mono config path = '/home/elian/goseek-challenge/simulator/goseek-v0.1.4_Data/MonoBleedingEdge/etc'
Preloaded 'ScreenSelector.so'
Display 0 '0': 1366x768 (primary device).
Display 1 'W1642 16"': 1024x768 (secondary device).
Logging to /home/elian/.config/unity3d/Editor/Player.log
Evaluation episode on episode 0, scene 3
Traceback (most recent call last):
  File "eval.py", line 85, in <module>
    results = main(episode_cfg, agent_args)
  File "eval.py", line 66, in main
    return benchmark.evaluate(agent)
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek_benchmark.py", line 97, in evaluate
    scene_id=self.scenes[episode], random_seed=self.random_seeds[episode]
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek.py", line 138, in reset
    super().reset(scene_id, random_seed)
  File "/home/elian/tesse-gym/src/tesse_gym/core/tesse_gym.py", line 250, in reset
    return self.form_agent_observation(observation)
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek_full_perception.py", line 68, in form_agent_observation
    eo, seg, depth = tesse_data.images
ValueError: too many values to unpack (expected 3)

Also, my laptop does not have a GPU.

@joeljosephjin (Author)
It was because I was using multiple monitors at the same time. When I disconnected the others, another error came up:

Set current directory to /home/elian/goseek-challenge
Found path: /home/elian/goseek-challenge/simulator/goseek-v0.1.4.x86_64
Mono path[0] = '/home/elian/goseek-challenge/simulator/goseek-v0.1.4_Data/Managed'
Mono config path = '/home/elian/goseek-challenge/simulator/goseek-v0.1.4_Data/MonoBleedingEdge/etc'
Preloaded 'ScreenSelector.so'
Display 0 '0': 1366x768 (primary device).
Logging to /home/elian/.config/unity3d/Editor/Player.log
Evaluation episode on episode 0, scene 3
Traceback (most recent call last):
  File "eval.py", line 85, in <module>
    results = main(episode_cfg, agent_args)
  File "eval.py", line 66, in main
    return benchmark.evaluate(agent)
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek_benchmark.py", line 97, in evaluate
    scene_id=self.scenes[episode], random_seed=self.random_seeds[episode]
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek.py", line 138, in reset
    super().reset(scene_id, random_seed)
  File "/home/elian/tesse-gym/src/tesse_gym/core/tesse_gym.py", line 250, in reset
    return self.form_agent_observation(observation)
  File "/home/elian/tesse-gym/src/tesse_gym/tasks/goseek/goseek_full_perception.py", line 79, in form_agent_observation
    axis=-1,
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 240 and the array at index 1 has size 768

@joeljosephjin (Author)
I tried the command again after a few minutes and it worked :)

@ZacRavichandran (Member)
Good to hear!

@joeljosephjin (Author) commented May 7, 2020

I am now trying the ppo2 sample IPython notebook on my laptop (i5 5th gen processor), which doesn't have a GPU.
The "python eval.py" command is working.

But "model.learn(...)" in the notebook raises this error: "ValueError: too many values to unpack".

  1. Does the GOSEEK competition require a GPU (e.g. for submitting the Docker image)?

  2. I am getting two windows (where the simulation is rendered) after the env is created in the notebook. Is that normal?

@ZacRavichandran (Member)

Could you print the whole stack trace? That may help pinpoint the issue.

As for your questions:

  1. Nope! You can run (almost) everything locally without a GPU. And once you submit a solution via EvalAI, it will be run on one of our GPU-enabled AWS instances. The only component that requires a GPU is the semantic segmentation network in the perception pipeline, but that can be disabled.

  2. Yep! In the example notebook there is the variable n_environments = 2. This determines the number of environments used for training and can be adjusted as needed.
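
For reference, here is a minimal sketch of how n_environments might be wired into a vectorized environment with stable-baselines 2.x (the make_goseek_env factory below is hypothetical; the actual notebook's environment constructor and policy settings may differ):

from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv

n_environments = 2  # each entry launches its own simulator instance/window

def make_env(rank):
    # Hypothetical factory standing in for the notebook's GOSEEK env setup.
    def _init():
        return make_goseek_env(worker_id=rank)
    return _init

env = DummyVecEnv([make_env(i) for i in range(n_environments)])
model = PPO2("CnnPolicy", env, verbose=1)

Each entry in the list becomes a separate simulator instance, which is why two render windows appear.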

@joeljosephjin (Author) commented May 9, 2020

This is the error:

WARNING:tensorflow:From /home/elian/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/common/base_class.py:1143: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-c1c38248fd91> in <module>
----> 1 model.learn(total_timesteps=total_timesteps)#, callback=save_checkpoint_callback)

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/ppo2/ppo2.py in learn(self, total_timesteps, callback, log_interval, tb_log_name, reset_num_timesteps)
    334                 callback.on_rollout_start()
    335                 # true_reward is the reward without discount
--> 336                 rollout = self.runner.run(callback)
    337                 # Unpack
    338                 obs, returns, masks, actions, values, neglogpacs, states, ep_infos, true_reward = rollout

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/common/base_class.py in runner(self)
    792     def runner(self) -> AbstractEnvRunner:
    793         if self._runner is None:
--> 794             self._runner = self._make_runner()
    795         return self._runner
    796 

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/ppo2/ppo2.py in _make_runner(self)
     98     def _make_runner(self):
     99         return Runner(env=self.env, model=self, n_steps=self.n_steps,
--> 100                       gamma=self.gamma, lam=self.lam)
    101 
    102     def _get_pretrain_placeholders(self):

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/ppo2/ppo2.py in __init__(self, env, model, n_steps, gamma, lam)
    447         :param lam: (float) Factor for trade-off of bias vs variance for Generalized Advantage Estimator
    448         """
--> 449         super().__init__(env=env, model=model, n_steps=n_steps)
    450         self.lam = lam
    451         self.gamma = gamma

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/common/runners.py in __init__(self, env, model, n_steps)
     29         self.batch_ob_shape = (n_envs * n_steps,) + env.observation_space.shape
     30         self.obs = np.zeros((n_envs,) + env.observation_space.shape, dtype=env.observation_space.dtype.name)
---> 31         self.obs[:] = env.reset()
     32         self.n_steps = n_steps
     33         self.states = model.initial_state

~/miniconda3/envs/goseek/lib/python3.7/site-packages/stable_baselines/common/vec_env/dummy_vec_env.py in reset(self)
     57     def reset(self):
     58         for env_idx in range(self.num_envs):
---> 59             obs = self.envs[env_idx].reset()
     60             self._save_obs(env_idx, obs)
     61         return self._obs_from_buf()

~/tesse-gym/src/tesse_gym/tasks/goseek/goseek.py in reset(self, scene_id, random_seed)
    136             np.ndarray: Agent's observation. """
    137         self.env.send(EpisodeResetSignal())
--> 138         super().reset(scene_id, random_seed)
    139 
    140         self.env.request(RemoveObjectsRequest())

~/tesse-gym/src/tesse_gym/core/tesse_gym.py in reset(self, scene_id, random_seed)
    248 
    249         self._init_pose(observation.metadata)
--> 250         return self.form_agent_observation(observation)
    251 
    252     def render(self, mode: str = "rgb_array") -> np.ndarray:

~/tesse-gym/src/tesse_gym/tasks/goseek/goseek_full_perception.py in form_agent_observation(self, tesse_data)
     66                 pose vector. To recover images and pose, see `decode_observations` below.
     67         """
---> 68         eo, seg, depth = tesse_data.images
     69         seg = seg[..., 0].copy()  # get segmentation as one-hot encoding
     70 

ValueError: too many values to unpack (expected 3)

Output of print(len(tesse_data.images)): 6

joeljosephjin reopened this May 10, 2020

@joeljosephjin (Author) commented May 10, 2020

After re-running the code several times, it did start to work, but then it would again give tuples of length 6, 9, or 12 after 8000 iterations. I'll try to examine the contents of this bigger tuple :)

@ZacRavichandran (Member)
Thanks for looking into the contents, I'd be curious to see what you find!

It's an interesting error because the number of images should correspond to the number of modalities requested (e.g. RGB, segmentation, depth), which is defined here.

It could be helpful to get the shape of the returned images:

for img in tesse_data.images:
    print(img.shape)

This will help us confirm that the number of environments and the resolution are as expected.
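
For completeness, here is a slightly expanded version of that check (purely illustrative; it assumes the three expected modalities are RGB, segmentation, and depth, in that order):

# Print how many images came back and the shape/dtype of each one.
print(len(tesse_data.images))
for i, img in enumerate(tesse_data.images):
    print(i, img.shape, img.dtype)

# If the count is a multiple of 3, viewing the images in groups of 3 can show
# whether extra sets of (RGB, segmentation, depth) are being returned.
for start in range(0, len(tesse_data.images), 3):
    group = tesse_data.images[start:start + 3]
    print("group", start // 3, [img.shape for img in group])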
