
kpts0.shape error when matching an object image against video capture frames containing the object #95

Open
TuanBao0711 opened this issue Nov 28, 2023 · 2 comments

Comments

@TuanBao0711

I'm trying to match a drone in one image against a video that contains the drone, but it doesn't work. Matching the image against a single frame saved from the video works, but matching it against frames read directly from the video capture fails. Can someone help me?

Error:
File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\testlightglue.py", line 69, in
matches01 = matcher({'image0': feats0, 'image1': feats1})
File "C:\Users\TuanBao\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\TuanBao\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\LightGlue\lightglue\lightglue.py", line 463, in forward
return self._forward(data)
File "C:\Users\TuanBao\Desktop\My_Docs\CNTT\imgMatching\lightGlue\LightGlue\lightglue\lightglue.py", line 470, in _forward
b, m, _ = kpts0.shape
ValueError: not enough values to unpack (expected 3, got 2)

This is my testlightglue.py script:

from lightglue import LightGlue, SuperPoint, DISK, SIFT, ALIKED
from lightglue.utils import load_image, rbd, numpy_image_to_torch
from lightglue import viz2d
import cv2

extractor = SuperPoint(max_num_keypoints=2048).eval().cuda() # load the extractor
matcher = LightGlue(features='superpoint').eval().cuda()

imgObject = cv2.imread('img/2.jpg') #the drone img
image0 = numpy_image_to_torch(imgObject).cuda()
feats0 = extractor.extract(image0)

cap = cv2.VideoCapture('video/RGB.mp4') #the drone video
cap = cv2.VideoCapture('video/RGB.mp4') #the drone video
while cap.isOpened():
    ret, frame = cap.read()
    image1 = numpy_image_to_torch(frame).cuda()

    feats1 = extractor.extract(image1)

    # run the matcher
    matches01 = matcher({'image0': feats0, 'image1': feats1})
    feats0, feats1, matches01 = [rbd(x) for x in [feats0, feats1, matches01]]  # remove batch dimension
    kpts0, kpts1, matches = feats0["keypoints"], feats1["keypoints"], matches01["matches"]
    m_kpts0, m_kpts1 = kpts0[matches[..., 0]], kpts1[matches[..., 1]]

    m_kpts0_numpy = m_kpts1.cpu().numpy()
    m_kpts0_cv2 = m_kpts0_numpy.round().astype(int)

    for keypoint in m_kpts0_cv2:
        x, y = keypoint
        frame = cv2.circle(frame, (x, y), 5, (255, 0, 0), -1)
    cv2.imshow('TEST lightglue', frame)
    if cv2.waitKey(5) & 0xFF == 27:
        break

cap.release()
@Phil26AT
Collaborator

Hi @TuanBao0711, thank you for opening this issue and sorry for the late reply. The issue might be related to PR #92. Can you try the PR and let us know if this fixes your problem?

@Karajan1962

Hi @Phil26AT and @TuanBao0711, thanks for opening this issue. I'm doing something similar and I tried PR #92, but it is still not fixed. Here's my error message:

{
"name": "ValueError",
"message": "not enough values to unpack (expected 3, got 2)",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 50
48 start.record()
49 with torch.inference_mode():
---> 50 matches01 = matcher({'image0': feats0, 'image1': feats1})
51 torch.cuda.synchronize()
52 end.record()

File ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/LightGlue/lightglue/lightglue.py:465, in LightGlue.forward(self, data)
444 """
445 Match keypoints and descriptors between two images
446
(...)
462 matches: List[[Si x 2]], scores: List[[Si]]
463 """
464 with torch.autocast(enabled=self.conf.mp, device_type="cuda"):
--> 465 return self._forward(data)

File ~/LightGlue/lightglue/lightglue.py:473, in LightGlue._forward(self, data)
471 kpts0, kpts1 = data0["keypoints"], data1["keypoints"]
472 b, m, _ = kpts0.shape
--> 473 b, n, _ = kpts1.shape
474 device = kpts0.device
475 size0, size1 = data0.get("image_size"), data1.get("image_size")

ValueError: not enough values to unpack (expected 3, got 2)"
}
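
If the same unpacking error appears even with PR #92, one way to narrow it down (a sketch; feats0 and feats1 follow the naming in the snippets above) is to check the keypoint tensor shapes right before calling the matcher:

# sanity check before matching: the extractor returns batched tensors of shape (1, N, 2);
# a 2D shape here usually means rbd() was already applied to the features
for name, feats in [("feats0", feats0), ("feats1", feats1)]:
    kpts = feats["keypoints"]
    if kpts.dim() != 3:
        raise ValueError(f"{name}['keypoints'] has shape {tuple(kpts.shape)}, "
                         "expected (batch, num_keypoints, 2)")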
