Cost of brute-forcing a set of passwords for multiple domains (or key-files) #1

cipriancraciun opened this issue Nov 14, 2024 · 16 comments


@cipriancraciun

Currently, based on the README, if an attacker wants to brute-force a user's password, he has to compute Argon2id for each password try (given by the user as input). (Which is good.)

However, if the attacker has multiple domains in sight, then obtaining actual passwords (outputs from the tool) is almost instantaneous, because from the masterKey onward there are only simple hash functions.

On the same subject, if the attacker has to try out multiple key-files (say he doesn't know which file on the file-system is the actual key-file used), he can do so without incurring any extra Argon2id (or alternative) operations.

(I.e. instead of paying one Argon2id cost for each "input" password try multiplied by each domain, the attacker pays only for the initial Argon2id, then each additional domain is free.)


Another take on this subject is the following type of attack:

  • say the attacker obtains a dump of weakly-hashed passwords from a target site; (something that can be easily checked by brute-force given enough inputs;)
  • say that the owner of the site has alerted its users to the potential breach, and the user has generated a new password (i.e. incremented the counter) and updated it on the site;
  • the attacker can try to guess the correct "input" password by doing a single Argon2id, then generating up to, say, 100 "output" passwords (or even more), and checking if any of those matches the weakly-hashed password from the dump;
  • if the match is found, the attacker now knows with what counter offset to start poking at the target site;

(I.e. instead of paying one Argon2id cost for each "input" password try multiplied by each counter try, the attacker pays only for the initial Argon2id, and then each additional counter value is free.)


A proposed simple solution is to:

  • either hash as much as possible into the Argon2id salt (including the domain, keyfile, counter, and other parameters the user might have used; see the sketch after this list);
  • or have multiple phases of Argon2id (or an alternative);
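A rough sketch of the first option, in Python (using hashlib's BLAKE2b to canonicalize the parameters into the salt and the third-party argon2-cffi package for the Argon2id step; the function names and personalization strings are purely illustrative, not this project's API):

import hashlib
from argon2.low_level import hash_secret_raw, Type

def derive_site_key(master_password: bytes, domain: bytes, counter: int,
                    keyfile_digest: bytes) -> bytes:
    # Canonicalize each parameter separately (length-prefixed) before hashing,
    # so that "ab" + "c" cannot collide with "a" + "bc".
    salt_hasher = hashlib.blake2b(digest_size=16, person=b'example.salt')
    for part in (domain, counter.to_bytes(8, 'little'), keyfile_digest):
        salt_hasher.update(len(part).to_bytes(8, 'little'))
        salt_hasher.update(part)
    salt = salt_hasher.digest()

    # The expensive step now depends on every parameter, so each new domain,
    # counter, or keyfile guess costs the attacker another full Argon2id run.
    return hash_secret_raw(secret=master_password, salt=salt,
                           time_cost=3, memory_cost=256 * 1024,  # 256 MiB
                           parallelism=4, hash_len=32, type=Type.ID)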
@samuel-lucas6
Owner

On the same subject, if the attacker has to try out multiple key-files (say he doesn't know which file on the file-system is the actual key-file used), he can do so without incurring any extra Argon2id (or alternative) operations.

This isn't a problem assuming high-entropy keyfiles are used because they're computationally infeasible to brute force. That decision was by design since you don't need password hashing for high-entropy secrets (e.g., a 256-bit key stored in a file).

The keyfile could've been passed to Argon2, but the way the YubiKey pepper derivation works (the challenge being dependent on all inputs, including the master password) prevents it from being passed to Argon2, and it made sense to have the pepper passed the same way regardless of which approach you use. The Argon2 secret key parameter also isn't available in all implementations.

(I.e. instead of paying one Argon2id cost for each "input" password try multiplied by each domain, the attacker pays only for the initial Argon2id, then each additional domain is free.)

However, this is a very interesting point. It's taking me a while to wrap my head around it this late.

  1. If you're guessing the derived password for one domain, it speeds up attacks if you don't know the counter, length, and character set (because you can quickly compute outputs for those different options).
  2. If you've compromised the master password from one domain, it speeds up attacks against other domains (because of the above plus the same for different domains).
  3. If multiple sites the user uses have been breached, the attacker can search for multiple derived passwords more efficiently (basically the above).

How problematic is this?

  • If you assume the counter, length, and character set are known/predictable, there's little to no advantage attacking one domain.
  • If you assume the list of domains is known/predictable, there's little to no advantage attacking multiple domains.
  • If the master password is compromised and you have no pepper, it's basically game over anyway because that's the only value with real entropy (the only proper secret).

With that said, this does seem unnecessarily bad. The attacker should be slowed as much as possible. It's also got me wondering about whether the peppering has been done in the best way.

I think this is the result of looking at Spectre, focusing on attacks against one domain, and feeling that variable-length salts/salts containing info style parameters are wrong (because I've been thinking about the salt like a random salt in traditional password hashing).

either hash as much as possible into the Argon2id salt (including the domain, keyfile, counter, and other parameters the user might have used);
or have multiple phases of Argon2id (or an alternative);

Multiple Argon2 calls are a bad idea because of the amount of delay/splitting the delay. Putting more into the salt and possibly using the Argon2 secret key parameter for the pepper would be an improvement. Doing the latter would prevent the YubiKey challenge depending on the password though, which seems bad because then you get duplicate challenges.

Anyhow, missing this is rather depressing and embarrassing. I wasn't planning on having a v2 of this program, but it looks like I'm going to need to with this and the Unicode password problem...

@cipriancraciun
Author

This isn't a problem assuming high-entropy keyfiles are used because they're computationally infeasible to brute force. That decision was by design since you don't need password hashing for high-entropy secrets (e.g., a 256-bit key stored in a file).

Here I was speaking of another type of brute-force:

  • the attacker knows that the user uses a keyfile;
  • the attacker has a copy of all the user's files; (laptop, cloud, etc.;)
  • the attacker can't "visually" identify (for example by name, extension, folder, etc.) which might be the actual key file used by the user;
  • thus the attacker will iterate through each of the user's files, testing if that would be perhaps the keyfile;

The reasoning is that, if the user uses a keyfile, that file most likely resides on the file-system (or in cloud storage, etc.).

Thus, because there is no Argon2 (or similar) cost for checking each file, the attacker can pre-hash all the user's files once, and instantaneously test every potential key file against a given password (by generating one site "output" password for each key file, and checking that against a weak hash dump).

Or, put another way, when brute-forcing the user's master-password (say, checking it against a weak-hash dump of a site), having a very large list of potential key-files to check adds almost no effort for the attacker; it is as if the user hadn't used a key file at all.

The keyfile could've been passed to Argon2, but the way the YubiKey pepper derivation works [...]

I think that the security design (and its goals) should decide the "flow" of secrets, and in this case (of the keyfile), I think that pragmatic programming decisions impact the overall security.


Another observation on key-file attack opportunities: currently you are hashing the key file without any other parameters (like the password, identity, domain, etc.), just with the customization string cahir.keyfile, and then this hash is used as an input for further derivations.

An attacker that has persistent access to a user's laptop or cloud doesn't need to exfiltrate all of the user's files to check them in a future brute-force attack (as described in the previous paragraph); he can just compute and exfiltrate the hashes. (These, being small, would most likely go undetected by network monitoring or other simple monitoring solutions.)

Instead, if, each time a key file is used, its hashing incorporated some parameters (like the master key), then the attacker would need to exfiltrate the files themselves and store them until a brute-force is attempted. (Thus putting storage costs on his attack.)

I've written a bit more on the topic here:


[...] If the master password is compromised and you have no pepper, it's basically game over anyway because that's the only value with real entropy (the only proper secret).

With that said, this does seem unnecessarily bad. The attacker should be slowed as much as possible. It's also got me wondering about whether the peppering has been done in the best way.

As noted previously, I too have an encryption / password-generation tool, which I've been working on for the last two years, and I've been thinking a lot about this problem.

I could point you to my derivation code (written in Rust, but pretty readable):

If you want to get in touch and discuss more about this subject, see at the beginning of the https://github.com/volution/z-tokens readme where I point a Discord channel or my email address. (I think it's easier than via GitHub issues.)


Putting more into the salt and possibly using the Argon2 secret key parameter for the pepper would be an improvement. Doing the latter would prevent the YubiKey challenge depending on the password though, which seems bad because then you get duplicate challenges.

If you look at my design, where I've abstracted your YubiKey device as an "oracle" (in fact I'm using ssh-agent as a YubiKey replacement), you'll see that the same "secret" is passed through multiple phases for different purposes.

Simplifying my design (and adapting it for your case) for the purpose of exemplification:

  • the password (and all the other parameters like identity, domain, count, key-file, presence of YubiKey and its slot, etc.) are "hashed" (as in Blake, not Argon) once together into what I call the partial_key in my derive_keys_phase_1; let's call this the "salt";
  • (if YubiKey is used) the "salt" (or a hash based on that) is then used to feed the challenge for the YubiKey; (thus, if the attacker wants to brute-force anything, each time has to pass through the YubiKey, thus adding a physical rate-limiting;) let's call this "response";
  • (if YubiKey is used) the "response" is hashed again with the "salt" to obtain the new "salt"; (so that a malicious YubiKey can't weaken the salt by just returning the same "response" each time;) (you kind of get this in your design, because you concatenate the "pepper" to the "masterKey";)
  • then the "input password" is passed through Argon2 with the "salt" (either original one, or the new one after YubiKey;)

As seen above, special care should be taken to protect against weak or compromised YubiKeys.
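To make the above flow concrete, here is a minimal Python sketch (challenge_response stands in for a real YubiKey HMAC challenge-response call and is purely hypothetical; the argon2-cffi package and the personalization strings are only for illustration):

import hashlib
from argon2.low_level import hash_secret_raw, Type

def derive(password: bytes, parameters: list[bytes], challenge_response=None) -> bytes:
    # Phase 1: hash the password and all parameters once into the initial "salt".
    h = hashlib.blake2b(digest_size=32, person=b'example.phase1')
    for part in [password, *parameters]:
        h.update(len(part).to_bytes(8, 'little'))
        h.update(part)
    salt = h.digest()

    if challenge_response is not None:
        # Phase 2: feed the "salt" to the token as the challenge, so every
        # brute-force attempt has to go through the (rate-limited) device.
        response = challenge_response(salt)
        # Phase 3: mix the "response" back with the old "salt", so a malicious
        # token returning a constant response cannot weaken the salt.
        salt = hashlib.blake2b(response + salt, digest_size=32,
                               person=b'example.phase2').digest()

    # Phase 4: the memory-hard step, bound to the password and all parameters.
    return hash_secret_raw(secret=password, salt=salt, time_cost=3,
                           memory_cost=256 * 1024, parallelism=4,
                           hash_len=32, type=Type.ID)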


Anyhow, missing this is rather depressing and embarrassing. I wasn't planning on having a v2 of this program, but it looks like I'm going to need to with this and the Unicode password problem...

It's not a big problem. There are many subtle details when working on such projects. As said, I've been working on my own tool for the past two years, and I still don't consider this part (the encryption and password-generation based on master-passwords) production ready.


One final note: I haven't looked at how exactly you are normalizing your inputs to the Blake function, but from the specification you seem to just concatenate inputs, based on the idea that either they are fixed in length, or the last one is variable.

I don't quite like this, because it's easy to slip a parameter that breaks the canonicalization aspect.

My approach was to always just hash individual inputs (with Blake and a custom customization string, i.e. a "context"), and then use those in the rest of the code. It makes everything simpler and safer.

(And the cost of many small Blake or other hash computations pales in comparison to the Argon2 cost.)

@cipriancraciun
Author

I think this is the result of looking at Spectre, focusing on attacks against one domain, and feeling that variable-length salts/salts containing info style parameters are wrong (because I've been thinking about the salt like a random salt in traditional password hashing).

I've taken a look at the Spectre(.app) specification, and indeed it suffers from the same issue (of easing the brute-force when the attacker must vary the parameters).

However, at a quick glance I also think they have some issue with the uniformity of the chosen characters. For example: passWord[i] = passChars[ <site key>[i+1] % LEN( passChars ) ]. If the number of characters to choose from does not evenly divide 256, I believe there will be a bias in the frequency with which some characters appear... Here is, for example, a recent article I've read about the issue and the proper solution:
=> https://dotat.at/@/2022-04-20-really-divisionless.html
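For illustration, a small Python check of the bias (and the usual rejection-sampling fix), assuming a 26-character set:

from collections import Counter

chars = "abcdefghijklmnopqrstuvwxyz"        # 26 does not evenly divide 256
counts = Counter(chars[b % len(chars)] for b in range(256))
print(counts["a"], counts["z"])             # 10 vs 9: 'a'..'v' occur 10 times, 'w'..'z' only 9

# One unbiased alternative: rejection sampling, discarding bytes that fall in
# the "overhang" above the largest multiple of len(chars).
def unbiased_char(random_bytes):
    limit = 256 - (256 % len(chars))        # 234 for a 26-character set
    for b in random_bytes:
        if b < limit:
            return chars[b % len(chars)]
    raise ValueError("ran out of random bytes")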

@samuel-lucas6
Owner

Or, put another way, when brute-forcing the user's master-password (say, checking it against a weak-hash dump of a site), having a very large list of potential key-files to check adds almost no effort for the attacker; it is as if the user hadn't used a key file at all.

I see what you mean. I guess this again isn't ideal, but it's in a scenario where the attacker has access to the keyfile so security is lost anyway. The recommended way to use keyfiles is to randomly generate them and store them offline on removable storage. The attacker may be able to identify a keyfile based on file accesses, bypassing the need for a search.

Instead, if, each time a key file is used, its hashing incorporated some parameters (like the master key), then the attacker would need to exfiltrate the files themselves and store them until a brute-force is attempted. (Thus putting storage costs on his attack.)

Isn't this the opposite of what you were saying before? Because you can compute the Argon2 output once and then never again for different keyfiles.

I like your website formatting.

Simplifying my design (and adapting it for your case) for the purpose of exemplification:

So it sounds like you're doing similar to what I'm doing with the YubiKey except relying on a fast hash of the password/parameters rather than a PBKDF output and then doing Argon2 after the YubiKey challenge-response. It feels a little dodgy hashing the password in any way without a PBKDF.

It's not a big problem. There are many subtle details when working on such projects. As said, I've been working on my own tool for the past two years, and I still don't consider this part (the encryption and password-generation based on master-passwords) production ready.

What's annoying is how this isn't a knowledge issue but something I clearly didn't think about properly not only when designing/implementing the protocol but also when reviewing the protocol and implementation. I was viewing things from a traditional key derivation sense.

I don't quite like this, because it's easy to slip a parameter that breaks the canonicalization aspect.

Yeah I know some people aren't keen on this. In my mind, it's fine as long as you're aware of it or nothing is going to change. It's simpler/more performant than length encoding or repeated hashing.

at a quick glance I also think they have some issue with the uniformity of the chosen characters.

Yep, I'm avoiding that using the Simple Modular Method.

Thanks for the links and reporting this.

@cipriancraciun
Author

The recommended way to use keyfiles is to randomly generate them and store them offline on removable storage.

Given that the purpose of a "master-password-based password-generator" tool is to provide you with your passwords at any time you need them, just by knowing the master password (and perhaps having your key-file ready), the key-file will most likely be stored on the user's laptop.

At best, it's going to be stored on a removable disk (USB-stick, SD-card, etc.), but while the user is at the computer, it will most likely be readily accessible via the file-system. Thus an attacker has a large time-window and many possible attack-vectors to exfiltrate any such key-file.


Instead, if, each time a key file is used, its hashing incorporated some parameters (like the master key), then the attacker would need to exfiltrate the files themselves and store them until a brute-force is attempted. (Thus putting storage costs on his attack.)

Isn't this the opposite of what you were saying before? Because you can compute the Argon2 output once and then never again for different keyfiles.

No, what I'm saying is that the key-file should be used like this: hash(parameters||keyfile) (where parameters is anything derived from all the contextual data, including the password and salt, but obviously with canonicalization, etc.), as opposed to, for example: hash(parameters||hash(keyfile)).

What is the difference? If the algorithm uses the key-file just to compute hash(keyfile) (without any other parametrization that directly depends on the current inputs), then, if the attacker wants to do an offline brute-force attack, all he has to do is exfiltrate the hash. Furthermore, the attacker can exfiltrate hashes of all the user's files, just in case one of them is perhaps a key-file.

On the other hand, assuming the hash can't be partially precomputed (which, as discussed below, is not automatically the case with Blake3), if the attacker needs to compute hash(parameters||keyfile), then he actually needs to have the entire contents of the keyfile for offline brute-forcing.

In the case of Blake3, for example, Blake3_keyed(key=Blake3(parameters), data=keyfile) does meet the requirement of having the entire file available, whereas Blake3(parameters||keyfile) doesn't.

Also HMAC(key=parameters, data=keyfile) doesn't cut it, because for large files (as per the HMAC specification), it's actually equivalent to HMAC(key=parameters, data=HASH(keyfile)). The same applies for Argon2 and scrypt.


I like your website formatting.

Thanks! :)


So it sounds like you're doing similar to what I'm doing with the YubiKey except relying on a fast hash of the password/parameters rather than a PBKDF output and then doing Argon2 after the YubiKey challenge-response. It feels a little dodgy hashing the password in any way without a PBKDF.

Indeed, the "oracle" (my YubiKey replacement) could note down the queries it receives, and thus it could try to do some brute-forcing based on that.

However, in my particular case, that "partial key" (that contains a hash of the password and other parameters) also depends on the "salt" / "nonce" of the file to be encrypted / decrypted, thus a malicious oracle also has to brute-force the salt / nonce.

On the other hand, now that you mention it, I think I can move the oracle call to after applying Argon2. However, in that case the Argon2 input isn't influenced by the oracle output... (I'll open an issue on my project and think about it... Perhaps one solution would be to apply something like bcrypt instead of the fast hash for the password, thus also having something cache-hard, or perhaps to have two rounds of Argon2...)

@samuel-lucas6
Owner

At best, it's going to be stored on a removable disk (USB-stick, SD-card, etc.), but while the user is at the computer, it will most likely be readily accessible via the file-system. Thus an attacker has a large time-window and many possible attack-vectors to exfiltrate any such key-file.

Yes, but that's unavoidable. In that scenario, the attacker can probably retrieve secrets from memory, which would also affect YubiKeys.

No, what I'm saying is that the key-file should be used like this: hash(parameters||keyfile) (where parameters is anything derived from all the contextual data, including the password and salt, but obviously with canonicalization, etc.), as opposed to, for example: hash(parameters||hash(keyfile)).

What I mean is if parameters is the Argon2 output, you only need to derive that value once to cache it and then can try lots of different keyfiles without a delay.

What is the difference? If the algorithm uses the key-file just to compute hash(keyfile) (without any other parametrization that directly depends on the current inputs), then, if the attacker wants to do an offline brute-force attack, all he has to do is exfiltrate the hash. Furthermore, the attacker can exfiltrate hashes of all the user's files, just in case one of them is perhaps a key-file.

I don't think this is a very realistic scenario. If they have a hash of the keyfile, they have the actual secret value and don't need the keyfile itself. The same goes for having the YubiKey response despite not having the secret key on the YubiKey, or for obtaining the secret key before it's passed to the YubiKey. The best you can do is limit the secret to one set of parameters, which I think is what you're getting at.

In the case of Blake3, for example, Blake3_keyed(key=Blake3(parameters), data=keyfile) does meet the requirement of having the entire file available, whereas Blake3(parameters||keyfile) doesn't.

Also HMAC(key=parameters, data=keyfile) doesn't cut it, because for large files (as per the HMAC specification), it's actually equivalent to HMAC(key=parameters, data=HASH(keyfile)).

I don't understand what you mean here. The message for HMAC will always be hashed regardless of the length; it's the key that may not be pre-hashed, which is one reason why there are equivalent keys in HMAC. As for BLAKE3, either way, those values are getting hashed.

However, in my particular case, that "partial key" (that contains a hash of the password and other parameters) also depends on the "salt" / "nonce" of the file to be encrypted / decrypted, thus a malicious oracle also has to brute-force the salt / nonce.

That assumes that value isn't public.

@cipriancraciun
Author

What I mean is if parameters is the Argon2 output, you only need to derive that value once to cache it and then can try lots of different keyfiles without a delay.

Indeed, but in my design, the key hash is sandwiched between a fast-hash of the password (and parameters) and the Argon2. Something along the lines of:

fast_hash = hash(password with salt and other parameters)
key_hash = hash(fast_hash || key_file_contents)
password_hash = argon2 (password = fast_hash, salt = key_hash)
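Expanded into a runnable Python sketch (using BLAKE2b in keyed mode as the fast hash and the third-party argon2-cffi package; all names and parameters are illustrative, not your project's API):

import hashlib
from argon2.low_level import hash_secret_raw, Type

def derive(password: bytes, salt_and_params: bytes, keyfile_path: str) -> bytes:
    fast_hash = hashlib.blake2b(password + salt_and_params, digest_size=32,
                                person=b'example.fast').digest()

    # Keyed hash over the full keyfile contents: the attacker needs the whole
    # file (not a cached digest of it) for every new fast_hash candidate.
    key_hasher = hashlib.blake2b(key=fast_hash, digest_size=16,
                                 person=b'example.keyfile')
    with open(keyfile_path, 'rb') as f:
        for chunk in iter(lambda: f.read(64 * 1024), b''):
            key_hasher.update(chunk)
    key_hash = key_hasher.digest()

    return hash_secret_raw(secret=fast_hash, salt=key_hash, time_cost=3,
                           memory_cost=256 * 1024, parallelism=4,
                           hash_len=32, type=Type.ID)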

I don't think this is a very realistic scenario. If they have a hash of the keyfile, they have the actual secret value and don't need the keyfile itself.

Not quite. For example, the attacker might have only "cold" access to the user's device (i.e. when no cryptographic material is in memory: for example while the device is powered off, in transit, or via unencrypted backups of the user's files).

Thus, if what actually matters in the key derivation is only the hash of the key file, the attacker can cheaply hash (current and past) files and use those digests at a later time in an offline brute-force attempt.


Also HMAC(key=parameters, data=keyfile) doesn't cut it, because for large files (as per the HMAC specification), it's actually equivalent to HMAC(key=parameters, data=HASH(keyfile)).

I don't understand what you mean here. The message for HMAC will always be hashed regardless of the length; it's the key that may not be pre-hashed, which is one reason why there are equivalent keys in HMAC.

You are right in terms of HMAC when the keyfile is used as the text parameter. (I've just checked my own notes.) The issue would have arisen if the keyfile had been used as the key parameter, since, per the HMAC specification (the equivalent-keys property is demonstrated below):

HMAC (H, K, text) -> MAC
MAC = H((K XOR opad) || H((K XOR ipad) || text))

[...] The authentication key K can be of any length up to B, the
block length of the hash function. Applications that use keys longer
than B bytes will first hash the key using H and then use the
resultant L byte string as the actual key to HMAC. In any case the
minimal recommended length for K is L bytes (as the hash output
length). [...]
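This equivalent-keys property can be checked directly with Python's standard library:

import hashlib, hmac

long_key = b'K' * 1000                 # longer than SHA-256's 64-byte block size
msg = b'some message'

a = hmac.new(long_key, msg, hashlib.sha256).digest()
b = hmac.new(hashlib.sha256(long_key).digest(), msg, hashlib.sha256).digest()
assert a == b                          # the long key and its hash are interchangeable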


As for BLAKE3, either way, those values are getting hashed.

The problem with Blake3 is that internally it computes the hash as a Merkle tree, so that it can be parallelized. Thus, if one uses just Blake3_hash(data = context || key_file_contents), and given that the context isn't known by the attacker but changes on each brute-force try (and assuming it is of constant size), the attacker can still pre-compute the internal state of all the Merkle-tree nodes except the first one, offset by the size of the context. (I believe this requires log2 storage space relative to the key-file length, because only the left-most branch can't be computed beforehand.) He also needs to store the first bytes of the key-file required to fill that first node.

This however doesn't work if one uses Blake3_keyed_hash(key = context, data = key_file), because then each individual Merkle Tree node also depends on the context (which is unknown), thus the attacker can't actually "compress" candidate key-files.
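For clarity, the two constructions side by side (assuming the third-party blake3 Python bindings; the API shown is theirs, and the file name is just an example):

from pathlib import Path
import blake3

context = b'hash of password and other parameters'   # changes on every brute-force try
keyfile = Path('keyfile.bin').read_bytes()

# Plain hash over a concatenation: for a fixed context length, every chunk after
# the context-bearing prefix can be precomputed and cached once per keyfile.
weak = blake3.blake3(context + keyfile).digest()

# Keyed mode: the (unknown) 32-byte key enters the compression of every chunk,
# so nothing useful can be cached without the full keyfile at attack time.
key = blake3.blake3(context).digest()                 # 32 bytes
strong = blake3.blake3(keyfile, key=key).digest()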

@hakavlad

hakavlad commented Nov 22, 2024

assuming high-entropy keyfiles

Keyfiles may have less entropy than passwords. You cannot force a user not to use keyfiles that contain only one byte repeated multiple times.

The execution time of three rounds of Argon2 is practically no different from not running Argon2 at all. Thus, the user will hardly notice any difference. A difference of 1 second is negligible. See also this discussion.

Forensics does not necessarily allow one to determine which file was used as the key file: devices can be mounted with the noatime option on Linux.

These are arguments in favor of using Argon2 to protect key files:

  • It is virtually imperceptible to the user.
  • The entropy of the keyfile is not always high.
  • If an adversary has access to the file system, protecting the key file with Argon2 will slow down the search for the keyfile.

I also believe that keyfiles should be hashed with a salt, as this slows down reverse brute-force attacks (this is more likely to apply to Kryptor).

@hakavlad

it's in a scenario where the attacker has access to the keyfile so security is lost anyway

Access to the file system containing the key file, or even access to the key file itself, does not automatically compromise the encrypted file. The adversary has yet to guess the password.

@hakavlad

Ideally, the key file and password are hashed with salt, and then protected together using Argon2. This gives the greatest security, and there is no good reason to abandon such a scheme.

@samuel-lucas6
Owner

I will certainly re-evaluate how to process the keyfile in the future. It probably is best passed to Argon2, but the libsodium implementation unfortunately doesn't support a pepper/secret key parameter, which makes it messy. Most password hashing algorithms don't support such a parameter.

Keyfiles may have less entropy than passwords. You cannot force a user not to use keyfiles that contain only one byte repeated multiple times.

Yes, but that's user error. Like you said in a Kryptor issue/discussion, you can't completely prevent the user ruining their security. A weak keyfile is essentially pointless.

The execution time of three rounds of Argon2 is practically no different from not running Argon2 at all. Thus, the user will hardly notice any difference. A difference of 1 second is negligible.

With what memory size? That's not true at all for large enough memory sizes. This also heavily depends on the implementation; for example, Monocypher can be significantly slower than libsodium's implementation depending on the machine/parameters.

And 1 second is a very noticeable delay. Plenty of software uses less delay than that. You also don't want to be running Argon2 multiple times if that's what you're talking about.

@hakavlad

hakavlad commented Nov 23, 2024

libsodium implementation unfortunately doesn't support a pepper/secret key parameter

Why do you need this? What about the following:

argon2_tag = argon2(salted_keyfile_digest || salted_passphrase_digest, 256 MiB * 3 passes)

but that's user error

you can't completely prevent the user ruining their security

But the tool can mitigate this using Argon2.

A weak keyfile is essentially pointless.

A weak keyfile enhances the overall protection by the amount of its entropy, especially if it is fed to the Argon2 input rather than mixed with its output.

With what memory size?

256 MiB

Monocypher can be significantly slower than libsodium

I mean 256 MiB x 3 with libsodium. It's about 0.5s on modern hardware.

And 1 second is a very noticeable delay

OK, it's noticeable, but not significant.

You also don't want to be running Argon2 multiple times

Yes.

@samuel-lucas6
Owner

Why do you need this? What about the following:

It's the cleanest approach, and the salt is limited to 128 bits in libsodium. Are you proposing feeding those in as the password?

A weak keyfile enhances the overall protection by the amount of its entropy, especially if it is fed to the Argon2 input rather than mixed with its output.

Yes, and every little helps. However, if it's really weak, Argon2 won't save you. But I'm not disputing that it should go into the PBKDF.

I mean 256 MiB x 3 with libsodium. It's about 0.5s on modern hardware.

On my M1 MacBook, it's 3 seconds with Monocypher. This program isn't using libsodium because the vcruntime requirement on Windows is a pain with a portable application.

OK, it's noticeable, but not significant.

It's acceptable for this type of program but not in every scenario.

@hakavlad

the salt is limited to 128 bits in libsodium. Are you proposing feeding those in as the password?

The Argon2 password has a 512-bit space (based on BLAKE2b), and I mean:

argon2_password = keyfile_digest || password_digest || any_other_secrets

argon2_tag = argon2(argon2_password, salt)

I suggest that all secret values be included as part of the Argon2 password.
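Something along these lines, as a sketch with the argon2-cffi package (the digests are salted via BLAKE2b's salt parameter; all concrete values are illustrative):

import hashlib, os
from pathlib import Path
from argon2.low_level import hash_secret_raw, Type

salt = os.urandom(16)
password_digest = hashlib.blake2b(b'master password', salt=salt, digest_size=64,
                                  person=b'example.pw').digest()
keyfile_digest = hashlib.blake2b(Path('keyfile.bin').read_bytes(), salt=salt,
                                 digest_size=64, person=b'example.kf').digest()

# All secret values go into the Argon2 password; the salt stays a plain salt.
argon2_password = keyfile_digest + password_digest
argon2_tag = hash_secret_raw(secret=argon2_password, salt=salt,
                             time_cost=3, memory_cost=256 * 1024,  # 256 MiB x 3
                             parallelism=4, hash_len=32, type=Type.ID)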

@samuel-lucas6
Owner

The Argon2 password has a 512-bit space (based on BLAKE2b), and I mean:

The password can be much longer than that.

I suggest that all secret values be included as part of the Argon2 password.

The issue with this is domain separation, and it's ugly doing concatenation compared to an incremental hash function API. You ideally don't want to be prehashing the password either.

With Monocypher, it makes sense to use the key parameter. The only issue is that the protocol then can't be implemented using libsodium, but most Argon2 implementations should be offering the key parameter since it's in the RFC.

@hakavlad

hakavlad commented Nov 27, 2024

The password can be much longer than that.

  1. Yes, but the key space is limited to 512 bits.
  2. Yes, and this fact can be used to feed multiple concatenated values as an Argon2 password.

The issue with this is domain separation

You can utilize a personalization string when hashing secrets.
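For example, with BLAKE2b's built-in personalization parameter from Python's standard library (the person strings are just examples):

import hashlib

secret = b'the same secret bytes'
d1 = hashlib.blake2b(secret, person=b'example.keyfile', digest_size=32).digest()
d2 = hashlib.blake2b(secret, person=b'example.passwd', digest_size=32).digest()
assert d1 != d2        # domain separation without changing the input itself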

it's ugly doing concatenation compared to an incremental hash function API

Not ugly, just not graceful. An example of what's really ugly is how VeraCrypt handles keyfiles: https://www.veracrypt.fr/en/Keyfiles.html

the libsodium implementation unfortunately doesn't support a pepper/secret key parameter

I suggested how to circumvent this restriction, and this solution works.
