Dynamic completion candidates update? #471
+1
Yes, it is on the todo list.
I implemented this deoplete source to help deal with 'tags' slowness in the meantime. It is pretty dumb, but it works for me. @Shougo, if you want, you can improve on it and put it into deoplete since it's generic enough.
I will work on the issue after the startup optimization.
It should be in ver.4.0.
It is the next task after Vim 8 support.
@Shougo Hi, I'd like to clarify a few things about … I've read the docs here (https://github.com/Shougo/deoplete.nvim/blob/master/doc/deoplete.txt#L980) and the deoplete-jedi source (https://github.com/zchee/deoplete-jedi/blob/master/rplugin/python3/deoplete/sources/deoplete_jedi.py#L168). To do what I want, I need to implement some kind of internal worker thread. When … Next time when …
Also, even though the word … Is my understanding correct? Please correct me if I'm wrong.
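To make the worker-thread idea above concrete, here is a minimal sketch of how such a source could be structured. It is a plain Python class rather than a real deoplete source (a real one would subclass deoplete's Base), and expensive_scan plus the candidate data are made up for illustration; deoplete.txt remains the authoritative reference for the is_async flag.

import threading


def expensive_scan(prefix):
    # Placeholder for a slow candidate scan (huge tags file, external index).
    import time
    time.sleep(2)
    return ['candidate_one', 'candidate_two']


class Source:
    # A real deoplete source would subclass deoplete's Base; this class
    # only shows the worker-thread flow.
    def __init__(self):
        self._worker = None
        self._candidates = []

    def _run(self, prefix):
        self._candidates = [{'word': w} for w in expensive_scan(prefix)]

    def gather_candidates(self, context):
        if self._worker is None:
            # First call: start the background scan instead of blocking.
            self._worker = threading.Thread(
                target=self._run, args=(context['input'],), daemon=True)
            self._worker.start()

        # While the worker is alive, tell deoplete to poll again and return
        # whatever is ready; once it finishes, return the final candidates.
        context['is_async'] = self._worker.is_alive()
        return self._candidates

A real source would also restart or invalidate the worker when the input changes; that part is omitted here.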
It looks like what I need is the same … Is it possible to make …? Is there another way? If I need to apply a custom filter and that filter is known to be slow, what do I do? I would like to be able to cancel filtering requests if the input (…) has changed.
Apparently VSCode has something very close to my needs, specifically Promises and Cancellation Tokens: https://code.visualstudio.com/docs/extensionAPI/patterns-and-principles#_cancellation-tokens
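The cancellation-token pattern has a straightforward Python analogue. The sketch below is only an illustration of the idea; CancellationToken and slow_filter are names made up here, not deoplete or VSCode API.

import threading


class CancellationToken:
    """Tiny stand-in for the VSCode-style cancellation token."""

    def __init__(self):
        self._event = threading.Event()

    def cancel(self):
        self._event.set()

    @property
    def cancellation_requested(self):
        return self._event.is_set()


def slow_filter(candidates, pattern, token):
    matched = []
    for i, cand in enumerate(candidates):
        # Check the token every so often so the caller can abort mid-way
        # (e.g. when the user keeps typing and the input changes).
        if i % 1000 == 0 and token.cancellation_requested:
            return None  # results are stale, caller should discard them
        if pattern in cand:
            matched.append(cand)
    return matched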
Could you please explain how and when to use this flag? The documentation is a bit scarce...
Yes.
You don't have to use async coroutines.
If the flag is enabled, deoplete does not use the internal cache.
I don't understand what your filter is.
I don't understand it.
It should be a new feature.
How are you going to implement dynamically updating them? I'm curious. A callback to update the results?
It's basically the same fuzzy filtering & ranking in one place but done using a Python C extension. Nothing fancy though.
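For readers unfamiliar with the term, "fuzzy filtering & ranking in one place" means roughly the toy below. This pure-Python version is only illustrative; the tool described above is a C extension and certainly scores candidates differently.

def fuzzy_score(pattern, word):
    # None if pattern does not fuzzily match word, otherwise a score tuple
    # (smaller span between matched characters ranks higher).
    if not pattern:
        return (0, len(word))
    pos = -1
    first = last = None
    for ch in pattern:
        pos = word.find(ch, pos + 1)
        if pos < 0:
            return None
        first = pos if first is None else first
        last = pos
    return (last - first, len(word))


def fuzzy_rank(pattern, words):
    scored = [(fuzzy_score(pattern, w), w) for w in words]
    return [w for s, w in sorted(p for p in scored if p[0] is not None)]


# fuzzy_rank('fb', ['foobar', 'fab', 'bar'])  -> ['fab', 'foobar']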
Hm, not exactly. Suppose that I'm going against the suggestions and I'm doing my own filtering in … In my case, to be able to cancel time-consuming filtering in the middle (for example, if the input has changed) and to not block deoplete, I had to do the following: …
I should probably think more about the use cases. Now it looks like just detecting a change in …
The solution is multi-process. Async coroutines do not save the computation.
Hm. I can add …
Yes. That is the feature.
Right.
With the current deoplete behavior, it is hard to cancel the results for deoplete sources. The input change should be detected by …
That would be useful! Could you please add it? They say they're using multithreading to parallelise matching so it sounds promising. It could be faster than my current solution.
On top of that, any slow source can slow down the whole deoplete plugin. Having sources in separate processes could probably solve this issue. Another improvement could be to call all sources in parallel (this is what nvim-completion-manager claims to do).
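As an aside, the "call all sources in parallel" idea looks roughly like the sketch below in plain Python; buffer_source, tag_source and gather_all are placeholders invented for this illustration, not deoplete internals.

from concurrent.futures import ProcessPoolExecutor, as_completed


def buffer_source(prefix):
    # Fast source: words already in the buffer.
    return ['buffer_word', 'buffer_term']


def tag_source(prefix):
    # Pretend this one is slow (huge tags file).
    return ['tag_symbol']


def gather_all(prefix):
    sources = [buffer_source, tag_source]
    candidates = []
    with ProcessPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(src, prefix) for src in sources]
        # Results arrive as each source finishes, so a fast source is not
        # held back by a slow one.
        for fut in as_completed(futures):
            candidates.extend(fut.result())
    return candidates


# On platforms that spawn worker processes, call gather_all() from under
# an `if __name__ == '__main__':` guard.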
I have similar solutions in mind. But it takes time to implement. Not easy.
@balta2ar Please use …
@balta2ar I have added …
Sorry, I don't quite understand it. Why do I need a flag to detect input change? The source plugin can remember the last input string and compare it with the new one, thus detecting the change by itself. Is there something else to it? Also, using the same flag both as input and output for different purposes ("input changed" versus "call me again soon") could be very confusing... Thank you for the cpsm, I'll try it!
Yes. But all async sources need the feature.
OK. I have changed the flag.
Could you give an example, please? I have an async source and I remember the previous input and compare it with the current one, and it's working fine. I may be missing something.
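A toy version of the "remember the previous input" approach described above; the word list and class layout are made up for illustration, and a slow source would cancel and restart its worker at the marked point instead of recomputing inline.

class Source:
    """Recomputes candidates only when the input actually changed."""

    WORDS = ('alpha', 'alignment', 'apply', 'banana')

    def __init__(self):
        self._last_input = None
        self._cached = []

    def gather_candidates(self, context):
        if context['input'] != self._last_input:
            self._last_input = context['input']
            # A slow source would cancel its running worker here and start
            # a new one for the fresh input.
            self._cached = [w for w in self.WORDS
                            if w.startswith(context['input'])]
        return self._cached


# src = Source()
# src.gather_candidates({'input': 'al'})  -> ['alpha', 'alignment']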
May I suggest …
Frankly, I don't know the exact server configuration, but it is definitely not Windows. The above hardware specs are what I see inside the VM, i.e. the provisioned resources.
Latest master (3dee4b1) with profile settings:
FWIW, the behavior is no different on the latest nightly build of neovim.
Is there any log that I can turn on that would help debug the ":qa!" issue?
I think the timer interrupt in the VM is very slow. I cannot fix the problem.
I think it is the same problem as #637.
Please test the latest version of deoplete in a non-VM environment.
I have checked the log.
Even if it is faster, it's of no use to me. Sorry, but my work environment is not going to change for some objective reasons. I think I will just stay on the older version for now, which mostly works for me. Thank you!
I know. But I want to know whether the slowness is caused by the VM.
And I will implement a serial completion feature instead.
Still, I can't do a fair apples-to-apples comparison because I don't have CentOS running on bare metal. The only Unix that I have on bare metal is macOS, which is pretty far from CentOS. So even if I get different results, it could be something else.
Yes. But you know, CentOS on bare metal is different from CentOS in a VM.
@mrbiggfoot Please test the below patch and profile.

diff --git a/rplugin/python3/deoplete/child.py b/rplugin/python3/deoplete/child.py
index 1e14e09..f171be9 100644
--- a/rplugin/python3/deoplete/child.py
+++ b/rplugin/python3/deoplete/child.py
@@ -79,6 +79,7 @@ class Child(logger.LoggingMixin):
             self._on_event(args[0])
         elif name == 'merge_results':
             self._write(self._merge_results(args[0], queue_id))
+        self.debug('main_loop: end')

     def _write(self, expr):
         sys.stdout.buffer.write(self._packer.pack(expr))
@@ -141,6 +142,7 @@ class Child(logger.LoggingMixin):
             self.debug('Loaded Filter: %s (%s)', f.name, path)

     def _merge_results(self, context, queue_id):
+        self.debug('merged_results: begin')
         results = self._gather_results(context)

         merged_results = []
@@ -162,6 +164,7 @@ class Child(logger.LoggingMixin):
         is_async = len([x for x in results if x['context']['is_async']]) > 0
+        self.debug('merged_results: end')

         return {
             'queue_id': queue_id,
             'is_async': is_async,
And please test the patch.

diff --git a/rplugin/python3/deoplete/process.py b/rplugin/python3/deoplete/process.py
index d1ac7b3..f6e6c47 100644
--- a/rplugin/python3/deoplete/process.py
+++ b/rplugin/python3/deoplete/process.py
@@ -52,7 +52,10 @@ class Process(object):

     def enqueue_output(self):
         while self._proc:
-            b = self._proc.stdout.read(1)
+            b = self._proc.stdout.raw.read(102400)
+            if b == b'':
+                return
+
             self._unpacker.feed(b)
             for child_out in self._unpacker:
                 self._queue_out.put(child_out)
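Not from the patch itself, but a standalone illustration of why reading large chunks is safe here: msgpack's Unpacker buffers partial input, so whole messages come out regardless of how the bytes were chunked.

import msgpack

# Two messages packed back to back, as the child process would write them.
stream = b''.join(msgpack.packb(m) for m in ({'queue_id': 1}, {'queue_id': 2}))

unpacker = msgpack.Unpacker(raw=False)
# Feed arbitrary-sized chunks, like stdout.raw.read(102400) would return.
for i in range(0, len(stream), 5):
    unpacker.feed(stream[i:i + 5])
    for message in unpacker:
        print(message)  # complete dicts only, never partial ones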
Hi @Shougo, I think I'm also experiencing some issues now, after updating to the latest version. It also feels like my system is slower. I'm trying to debug, following the docs:
But now I'm getting an error when opening nvim:
I would like to provide more useful information; could you point out how I can help? Thanks for the great work!
@brunogsa Please test the latest version of deoplete.
OK. Cheers.
Sorry for spamming here, but I don't yet want to open an issue for what may well be my own stupidity... the file source https://github.com/mrbiggfoot/deoplete-filesrc has stopped working. It is not related to this change; it does not work even if I roll it back. But it used to work some time ago. Were there any changes to the source API? I'm not seeing anything relevant in the doc... perhaps you can take a quick look and spot what is wrong immediately?
OK. I will test it.
I have tested it. And it works. I think you must set:
let g:deoplete#filesrc#path = expand('~/src/neovim/completions')
You can use …
Thanks for looking into it @Shougo, but that was not it. I have an absolute path set in that variable. There's something weirder going on... if I open a file like "vim file.cc", my completions from the file work. But if I open that same file via fzf (which I use from vim via fzf.vim) after starting vim, the same completions don't work! The …
close() is needed. You should use … You can use readlines().
The code is broken.

def on_event(self, context):
    if not exists(self.__filepath):
        self.__cache = []
        return
    mtime = getmtime(self.__filepath)
    if mtime == self.__mtime:
        return
    self.__mtime = mtime
    f = open(self.__filepath, 'r')
    self.__cache = list(f.read().split())
The cache is cleared if the file does not exist. So, …
Yes. It is your source's bug.
Thank you again @Shougo, you are right. Stupid mistakes in my code.
Not quite, readlines() does not trim the newline chars.
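For completeness, a sketch of how the handler above could look with those two points applied: a with block so the handle is always closed, and read().split(), since readlines() would keep the newlines. This is only an illustration, not the actual fix that went into deoplete-filesrc.

from os.path import exists, getmtime


class Source:
    def __init__(self, filepath):
        self.__filepath = filepath
        self.__mtime = 0
        self.__cache = []

    def on_event(self, context):
        if not exists(self.__filepath):
            self.__cache = []
            return
        mtime = getmtime(self.__filepath)
        if mtime == self.__mtime:
            return
        self.__mtime = mtime
        # "with" guarantees close(); split() on the whole text drops the
        # newline characters that readlines() would keep.
        with open(self.__filepath, 'r') as f:
            self.__cache = f.read().split()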
Different sources have different speeds. Some are very fast, like "buffer" and "around"; some may take many seconds to gather candidates from, e.g. the "tag" source. Can we show the completion candidates as they become available from the source? I.e., when I start typing, I should instantly get completions from "buffer" and "around", and if I'm willing to wait those many seconds, the candidates from "tag" would appear and the candidate list would be dynamically redrawn.