
Does xxhr use a thread pool or std::async? #13

Open

portsip opened this issue Oct 16, 2021 · 7 comments

Comments
portsip commented Oct 16, 2021

Hi, does xxhr use a thread pool or std::async? CPR uses std::async to perform POST and GET for its asynchronous API, which spawns a massive number of threads when a lot of requests are performed.

Thanks

Member

daminetreg commented Oct 16, 2021

Dear @portsip, xxhr doesn't use any threads; it relies on a Boost Asio io_context to schedule execution of requests, letting the asynchronous I/O mechanisms of the host OS come into play.

In an upcoming update it will be possible to provide your own io_context/io_service to xxhr, so that you can decide to call io_context.run() on as many threads as you want.

But as of today it actually uses just one thread: the calling thread.

Author

portsip commented Oct 16, 2021

> Dear @portsip, xxhr doesn't use any threads; it relies on a Boost Asio io_context to schedule execution of requests, letting the asynchronous I/O mechanisms of the host OS come into play.
>
> In an upcoming update it will be possible to provide your own io_context/io_service to xxhr, so that you can decide to call io_context.run() on as many threads as you want.
>
> But as of today it actually uses just one thread: the calling thread.

Got it, thanks; this is really better than CPR.

Author

portsip commented Oct 16, 2021

I've tried to compile the example but it failed because there is no enum.h file included; where can I get enum.h?
I think it's http://aantron.github.io/better-enums/, right?

Author

portsip commented Oct 16, 2021

> Dear @portsip, xxhr doesn't use any threads; it relies on a Boost Asio io_context to schedule execution of requests, letting the asynchronous I/O mechanisms of the host OS come into play.
>
> In an upcoming update it will be possible to provide your own io_context/io_service to xxhr, so that you can decide to call io_context.run() on as many threads as you want.
>
> But as of today it actually uses just one thread: the calling thread.

Sorry, is xxhr really an asynchronous library? In my tests, if the destination host is down or unreachable, xxhr::GET blocks until it returns an error.

@daminetreg
Member

> I've tried to compile the example but it failed because there is no enum.h file included; where can I get enum.h? I think it's http://aantron.github.io/better-enums/, right?

It is meant to be compiled with https://tipi.build; you can get it by following the onboarding instructions behind https://tipi.build/signin.

Then to build locally just run `tipi . -t linux-cxx17`, `tipi . -t windows-cxx17`, or `tipi . -t macos-cxx17`.

> Sorry, is xxhr really an asynchronous library? In my tests, if the destination host is down or unreachable, xxhr::GET blocks until it returns an error.

The IO itself frees the application from any active wait; the OS calls back into the provided on_response handler once the response is ready.

The fact that the GET example blocks is because of the io.run() called within GET; in an upcoming update it will be possible to optionally pass the io_service to GET, and then the lib won't do the call to run() itself (it's a trivial change, but it isn't there yet).

Author

portsip commented Oct 16, 2021

> The fact that the GET example blocks is because of the io.run() called within GET; in an upcoming update it will be possible to optionally pass the io_service to GET, and then the lib won't do the call to run() itself.

Thanks, may I know when the next update will be ready?

BR

@daminetreg
Member

Sure, it should be ready in November at the latest; I'll mention you on the update. In the meantime you can wrap the calls in your own std::thread; from there on it will be fully async.

As well, on WebAssembly targets we don't use Boost Asio but the underlying XMLHttpRequest API (in the future it should be the fetch API), hence C++ web requests don't block the browser or the Node.js main loop.
