Password hashing is one of those topics where the right answer changed, and not everyone got the memo. MD5 and SHA-1 are completely wrong: no key derivation function behaviour, no work factor, trivially fast on a GPU. bcrypt was the right answer for about 15 years. scrypt improved on bcrypt. Argon2id won the Password Hashing Competition in 2015 and is the current consensus recommendation from OWASP and most security-focused projects.
If you're building a new system, use Argon2id. If you have a live system on bcrypt, stay on bcrypt for now and migrate to Argon2id by rehashing on each user's next successful login. The performance difference doesn't justify a forced password reset. What follows explains why each algorithm works, what its parameters control, and how to tune Argon2id for your hardware.
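The rehash-on-next-login pattern can be sketched as follows. The hash formats and helper functions here are illustrative stand-ins built on Python's standard library so the example runs anywhere; a real system would call into the bcrypt and argon2-cffi libraries instead.

```python
import hashlib
import os

def _kdf(password: str, salt: bytes) -> str:
    # shared stand-in KDF so the sketch stays stdlib-only
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1).hex()

def bcrypt_hash(password: str) -> str:
    # stand-in for bcrypt.hashpw (real bcrypt hashes start with $2b$)
    salt = os.urandom(16)
    return "$bcrypt$" + salt.hex() + "$" + _kdf(password, salt)

def bcrypt_verify(password: str, stored: str) -> bool:
    _, _, salt_hex, digest = stored.split("$")
    return _kdf(password, bytes.fromhex(salt_hex)) == digest

def argon2id_hash(password: str) -> str:
    # stand-in for argon2.PasswordHasher().hash
    salt = os.urandom(16)
    return "$argon2id$" + salt.hex() + "$" + _kdf(password, salt)

def argon2id_verify(password: str, stored: str) -> bool:
    _, _, salt_hex, digest = stored.split("$")
    return _kdf(password, bytes.fromhex(salt_hex)) == digest

def verify_and_migrate(password: str, stored: str, save_hash) -> bool:
    """Verify `password`; transparently upgrade legacy bcrypt hashes.

    A successful login is the only moment the server holds the plaintext,
    so it is the only moment a stored hash can be upgraded without a reset.
    """
    if stored.startswith("$bcrypt$"):
        if not bcrypt_verify(password, stored):
            return False
        save_hash(argon2id_hash(password))  # persist the upgraded hash
        return True
    return argon2id_verify(password, stored)
```

On each successful login another legacy hash disappears; over time the bcrypt population shrinks to inactive accounts, which can eventually be expired.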
Why password hashing is different from regular hashing
A password hash has one goal that a regular hash doesn't: it must be slow to compute. SHA-256 is designed to be fast: you can hash billions of inputs per second on modern hardware, which is great for data integrity checks and terrible for password storage. An attacker with a leaked database and a GPU farm can attempt billions of password guesses per second against SHA-256 hashes.
Password hashing functions (properly called key derivation functions or KDFs in this context) add a work factor, a configurable cost that makes each hash computation take a minimum amount of time. The work factor can be increased as hardware gets faster, keeping the computation expensive over time. They also use salts: random per-password values that ensure two users with the same password have different hashes, defeating precomputed rainbow tables.
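The salt's effect is easy to demonstrate with `hashlib.scrypt` from Python's standard library (the parameter choices here are illustrative):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Two users pick the same password, but each gets a fresh random salt...
salt_a, salt_b = os.urandom(16), os.urandom(16)
hash_a = hashlib.scrypt(password, salt=salt_a, n=2**14, r=8, p=1)
hash_b = hashlib.scrypt(password, salt=salt_b, n=2**14, r=8, p=1)

# ...so their stored hashes differ, and a precomputed table is useless.
assert hash_a != hash_b

# Verification simply re-runs the KDF with the stored salt.
assert hashlib.scrypt(password, salt=salt_a, n=2**14, r=8, p=1) == hash_a
```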
bcrypt
bcrypt was designed in 1999 by Niels Provos and David Mazières. It uses the Blowfish cipher's key schedule as its core operation, which was memory-intensive relative to the hardware of the time. Its single parameter is a cost factor (typically 10–12 for interactive logins) that doubles the work per increment: cost 11 is twice as expensive as cost 10.
bcrypt's limitation is that its salt is fixed at 128 bits and its output at 192 bits. More importantly, it was designed before GPU password cracking became a serious concern. Modern GPUs can run bcrypt hashes efficiently despite the cost factor, because the 4 KB memory requirement fits comfortably in fast on-chip GPU memory. It's not broken (bcrypt at cost 12 or 13 is still a reasonable choice for existing systems), but it's not optimal for new systems.
bcrypt also has a subtle maximum input length of 72 bytes; longer passwords are silently truncated by most implementations. This is usually mitigated by pre-hashing with SHA-256 before bcrypt (a pattern used by some frameworks), though the digest should then be encoded, e.g. as base64, because a raw digest can contain null bytes that many bcrypt implementations treat as a string terminator. It's a footgun for anyone who doesn't know about it.
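The pre-hash step can be sketched with the standard library alone (the bcrypt call itself is omitted here): SHA-256 reduces any passphrase to 32 bytes, and base64-encoding the digest keeps it null-free and well under bcrypt's limit.

```python
import base64
import hashlib

# A 200-character passphrase exceeds bcrypt's 72-byte input limit...
long_password = "x" * 200

# ...so hash it first, then base64-encode the digest: raw digests can
# contain null bytes, which many bcrypt implementations stop at.
prehash = base64.b64encode(hashlib.sha256(long_password.encode()).digest())

print(len(prehash))  # 44 bytes, comfortably under the 72-byte limit
# `prehash` is what you would then pass to e.g. bcrypt.hashpw(...)
```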
scrypt
Colin Percival designed scrypt in 2009 specifically to be resistant to hardware-accelerated attacks. scrypt's key insight is that memory, not raw computation, is the bottleneck for GPU and ASIC attacks: you can't easily parallelise across thousands of GPU cores when each operation requires a large random-access memory footprint.
scrypt has three parameters:
- N: CPU/memory cost (must be a power of 2). Controls total memory usage.
- r: block size. Increasing r increases memory and bandwidth requirements.
- p: parallelisation factor. Controls how many independent operations run in parallel.
The memory requirement is approximately 128 × N × r bytes. At the common setting of N=16384, r=8, p=1, scrypt uses around 16 MiB per hash. This makes GPU attacks significantly more expensive than bcrypt. scrypt is a solid choice and is used by several major projects (including Litecoin's proof-of-work function). Its complexity is its main liability: the three-parameter tuning is harder to reason about than Argon2id's more explicit parameters.
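Python exposes scrypt directly through `hashlib`, which makes the memory formula easy to check. This is a sketch with the parameters from the text, not a full password-storage scheme:

```python
import hashlib
import os

# Common interactive-login parameters: N=16384, r=8, p=1.
N, r, p = 16384, 8, 1
print(128 * N * r)  # approximate working memory in bytes: 16 MiB

salt = os.urandom(16)
# For much larger N you may need to pass maxmem=, since OpenSSL caps
# scrypt memory usage by default.
key = hashlib.scrypt(b"hunter2", salt=salt, n=N, r=r, p=p, dklen=32)
print(key.hex())
```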
Argon2id
Argon2 was designed by Alex Biryukov, Daniel Dinu, and Dmitry Khovratovich and won the Password Hashing Competition in 2015 after a multi-year evaluation. It comes in three variants:
- Argon2d: data-dependent memory access. Maximally resistant to GPU attacks but vulnerable to side-channel timing attacks, so not suitable for password hashing where the attacker might observe timing.
- Argon2i: data-independent memory access. Resistant to side-channel attacks but slightly less resistant to GPU attacks due to its more predictable access pattern.
- Argon2id: hybrid. The first half of execution uses Argon2i (data-independent), the second half Argon2d (data-dependent). Recommended for password hashing as it resists both side-channel and GPU attacks.
Argon2id has three parameters, plus a salt:

- m (memory cost): kilobytes of RAM per hash. This is the primary defence against GPU attacks: the more memory each hash requires, the fewer guesses can run in parallel on a GPU. OWASP minimum: 19 MiB (19456 KiB). Production recommendation: 64 MiB for interactive logins, 256–512 MiB for high-value operations.
- t (time cost): number of iterations, i.e. passes over the memory. Increasing t linearly increases both time and resistance to attacks that use less memory. OWASP minimum: 2 iterations. Raising t adds CPU time without adding memory, so it is the cheaper knob, but both are valid approaches.
- p (parallelism): number of threads to use. Should match your server's available threads, typically 1–4 for web applications. This is not a security parameter in the same way as m and t; it primarily affects throughput.
- Salt: a random per-password value, minimum 16 bytes and typically 32 bytes. It should come from a cryptographically secure random number generator. Libraries handle this automatically; don't generate salts manually.
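These parameters travel with the hash: Argon2 implementations emit the standard PHC string format, which encodes the variant, version, and parameters alongside the base64 salt and digest. That means old hashes remain verifiable even after you raise the parameters for new ones. The salt and hash fields below are placeholders:

```
$argon2id$v=19$m=65536,t=3,p=4$<base64 salt>$<base64 hash>
 │         │    │       │   │
 │         │    │       │   └─ parallelism (p, threads)
 │         │    │       └───── time cost (t, iterations)
 │         │    └───────────── memory cost (m) in KiB (65536 = 64 MiB)
 │         └────────────────── Argon2 version (19 = 0x13)
 └──────────────────────────── variant
```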
How to choose parameters
The goal is to make each hash take a specific amount of wall-clock time on your production hardware. OWASP recommends up to one second for interactive logins where users are waiting, and up to several seconds for offline or high-value operations. The exact millisecond target matters less than where you measure it: tune on your actual deployment hardware.
Benchmark on your hardware, not a developer laptop:
```python
# Python benchmark using argon2-cffi
import time

from argon2 import PasswordHasher

configs = [
    # (memory_kb, time_cost, parallelism)
    (65536, 2, 1),   # 64 MiB / 2 iterations / 1 thread - OWASP profile 1
    (65536, 3, 4),   # 64 MiB / 3 iterations / 4 threads - OWASP profile 2
    (262144, 2, 4),  # 256 MiB / 2 iterations / 4 threads - higher security
]

password = "benchmark-password"

for m, t, p in configs:
    ph = PasswordHasher(memory_cost=m, time_cost=t, parallelism=p,
                        hash_len=32, salt_len=16)
    times = []
    for _ in range(10):
        start = time.perf_counter()
        ph.hash(password)
        times.append(time.perf_counter() - start)
    avg_ms = (sum(times) / len(times)) * 1000
    print(f"m={m//1024}MiB t={t} p={p}: {avg_ms:.0f}ms avg")
```
For a typical web server handling interactive password verification, aim for 200–500 ms. For a high-security service where you can tolerate longer waits (admin accounts, sudo-equivalent operations), 500 ms to 2 seconds is reasonable. Adjust m first, since higher memory is the primary GPU defence, then adjust t to hit your target time.
Don't tune on load-bearing production traffic. Run the benchmark on a representative server instance with realistic concurrent load. A hash that takes 300ms when the server is idle might take 2 seconds under peak concurrent login load if you're using all available threads. Test with concurrent load, not sequential benchmarks.
Side-by-side comparison
| Property | bcrypt | scrypt | Argon2id |
|---|---|---|---|
| Standard status | Widely deployed | Widely deployed | PHC winner / OWASP recommended |
| Memory hardness | 4 KB (poor) | Configurable, good | Configurable, excellent |
| GPU resistance | Moderate | Good | Excellent |
| Side-channel resistance | Good | Good | Excellent (id variant) |
| Max password length | 72 bytes | Unlimited | Unlimited |
| Parameter tuning | Simple (1 param) | Complex (3 params) | Clear (3 params) |
| Library support | Excellent | Good | Good, growing |
| New systems | Not recommended | Acceptable | Recommended |
PassVault uses Argon2id
PassVault uses Argon2id for master password key derivation. The master password is never stored or transmitted; it is stretched into the encryption key that opens the vault. The parameters are set at the high end of OWASP's recommendations (256 MiB memory, 3 iterations, 4 threads) because this operation happens once per session, not on every request, and the security value is high enough to justify a longer wait.
On a mid-range server, this produces a derivation time of roughly 600 ms, visible to the user as the brief delay when unlocking the vault. This delay is intentional and appropriate: it makes offline brute-force attacks against an exported vault file extremely expensive, even with modern GPU clusters.
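A hypothetical sketch of this kind of key derivation (not PassVault's actual code), using argon2-cffi's low-level API to produce a raw key rather than an encoded hash string:

```python
import os

from argon2.low_level import hash_secret_raw, Type

# Stretch a master password into a 256-bit encryption key with the
# parameters described above. The salt is random and would be stored
# alongside the vault file; the key itself is never written to disk.
salt = os.urandom(32)
key = hash_secret_raw(
    secret=b"master password from the user",
    salt=salt,
    time_cost=3,             # iterations
    memory_cost=256 * 1024,  # 256 MiB, expressed in KiB
    parallelism=4,
    hash_len=32,             # 256-bit key, e.g. for AES-256-GCM
    type=Type.ID,            # the Argon2id variant
)
```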