Memory consumption of the random inserts benchmark.
Before the test, n elements are inserted in the same way as in the random full inserts test. Each key-value pair is then looked up in a random order different from the insertion order.
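A minimal sketch of the read phase might look as follows, using std::unordered_map as a stand-in for the benchmarked maps; the key type, seed, and function names are illustrative assumptions, not the benchmark's actual harness.

```c++
#include <algorithm>
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

// Assumes `map` was filled with (k, 1) for every k in `keys`, as in the
// random full inserts test described below.
std::int64_t read_all(const std::unordered_map<std::int64_t, std::int64_t>& map,
                      std::vector<std::int64_t> keys, std::mt19937_64& rng) {
    // Look the keys up in an order different from the insertion order.
    std::shuffle(keys.begin(), keys.end(), rng);

    std::int64_t sum = 0;
    for (auto k : keys) sum += map.at(k);  // every lookup is a hit
    return sum;  // returned so the reads cannot be optimized away
}
```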
Before the test, n elements are inserted in the same way as in the random full inserts test. Then another vector of n random elements, all distinct from the inserted elements, is generated, and each of these elements is searched for in the hash map.
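One way to guarantee that every probe misses is rejection sampling against the populated map, as in this sketch (the names are again illustrative assumptions):

```c++
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

// Assumes `map`, `rng` and `dist` come from the random full inserts setup.
std::size_t read_misses(const std::unordered_map<std::int64_t, std::int64_t>& map,
                        std::size_t n, std::mt19937_64& rng,
                        std::uniform_int_distribution<std::int64_t>& dist) {
    // Draw n fresh random keys, keeping only those absent from the map.
    std::vector<std::int64_t> probes;
    probes.reserve(n);
    while (probes.size() < n) {
        const std::int64_t k = dist(rng);
        if (map.find(k) == map.end()) probes.push_back(k);
    }

    // Timed phase: every lookup misses.
    std::size_t hits = 0;
    for (auto k : probes) hits += map.count(k);
    return hits;  // expected to be 0
}
```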
Before the test, n elements are inserted in the same way as in the random full inserts test. Each key is then deleted one by one, in a random order different from the insertion order.
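The delete phase reduces to a shuffle followed by erase calls; a sketch under the same assumptions:

```c++
#include <algorithm>
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

// Assumes `map` was filled from `keys` as in the random full inserts test.
void delete_all(std::unordered_map<std::int64_t, std::int64_t>& map,
                std::vector<std::int64_t> keys, std::mt19937_64& rng) {
    // Erase every key in a random order different from the insertion order.
    std::shuffle(keys.begin(), keys.end(), rng);
    for (auto k : keys) map.erase(k);
}
```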
Before the test, a vector of n random values spanning the whole range of the integer type is generated. Then for each value k in the vector, the key-value pair (k, 1) is inserted into the hash map.
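A self-contained sketch of this setup, with std::unordered_map standing in for the benchmarked maps and an arbitrary seed and element count:

```c++
#include <cstdint>
#include <limits>
#include <random>
#include <unordered_map>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;  // illustrative size

    // Setup: n random keys drawn from the whole range of the integer type.
    std::mt19937_64 rng(42);
    std::uniform_int_distribution<std::int64_t> dist(
        std::numeric_limits<std::int64_t>::min(),
        std::numeric_limits<std::int64_t>::max());
    std::vector<std::int64_t> keys(n);
    for (auto& k : keys) k = dist(rng);

    // Timed phase: insert the pair (k, 1) for every k in the vector.
    std::unordered_map<std::int64_t, std::int64_t> map;
    for (auto k : keys) map.emplace(k, 1);
}
```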
Before the test, n elements are inserted in the same way as in the random full inserts test. Then the hash map's iterators are used to read all the key-value pairs.
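Iteration itself is a plain range-for over the map; a sketch:

```c++
#include <cstdint>
#include <unordered_map>

// Assumes `map` was populated as in the random full inserts test.
std::int64_t iterate_all(const std::unordered_map<std::int64_t, std::int64_t>& map) {
    std::int64_t sum = 0;
    for (const auto& kv : map) sum += kv.second;  // visit every pair
    return sum;  // returned so the traversal cannot be optimized away
}
```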
Before the test, n elements are inserted in the same way as in the random full inserts test, and half of them are then deleted at random. All the original keys are then looked up in a different order, which leads to 50% hits and 50% misses.
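A sketch of how the half-deleted map and the 50/50 hit rate could be produced (function name and structure are assumptions):

```c++
#include <algorithm>
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

// Assumes `map` was filled from `keys` as in the random full inserts test.
std::size_t read_after_half_deletes(std::unordered_map<std::int64_t, std::int64_t>& map,
                                    std::vector<std::int64_t> keys,
                                    std::mt19937_64& rng) {
    // Delete a random half of the keys.
    std::shuffle(keys.begin(), keys.end(), rng);
    for (std::size_t i = 0; i < keys.size() / 2; ++i) map.erase(keys[i]);

    // Timed phase: probe all original keys in yet another order;
    // roughly half hit and half miss.
    std::shuffle(keys.begin(), keys.end(), rng);
    std::size_t hits = 0;
    for (auto k : keys) hits += map.count(k);
    return hits;  // ~keys.size() / 2
}
```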
Same as the random full inserts test, but the reserve method of the hash map is called beforehand to avoid any rehash during the insertion. This provides a fairer comparison, even though the growth factor of each hash map is different.
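The only change relative to the plain insert loop is the upfront reserve call, as in this sketch:

```c++
#include <cstdint>
#include <unordered_map>
#include <vector>

// Same timed phase as the random full inserts sketch above, but reserve()
// is called first so no rehash can happen during the insertions.
void insert_with_reserve(std::unordered_map<std::int64_t, std::int64_t>& map,
                         const std::vector<std::int64_t>& keys) {
    map.reserve(keys.size());
    for (auto k : keys) map.emplace(k, 1);
}
```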
Before the test, a vector with the values [0, n) is generated and shuffled. Then for each value k in the vector, the key-value pair (k, 1) is inserted into the hash map.
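A self-contained sketch of the shuffled-range setup (seed and size are illustrative):

```c++
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <random>
#include <unordered_map>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;  // illustrative size

    // Setup: the values [0, n), shuffled.
    std::vector<std::int64_t> keys(n);
    std::iota(keys.begin(), keys.end(), 0);
    std::mt19937_64 rng(42);
    std::shuffle(keys.begin(), keys.end(), rng);

    // Timed phase: insert the pair (k, 1) for every k in the vector.
    std::unordered_map<std::int64_t, std::int64_t> map;
    for (auto k : keys) map.emplace(k, 1);
}
```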
Before the test, n elements are inserted in the same way as in the random shuffle inserts test. Each key-value pair is then looked up in a random order different from the insertion order.
Before the test, a vector with n random values is generated, but only n/2 of them are inserted. Then the full vector is shuffled and processed with a mix of 50% reads, 25% inserts, and 25% deletes (each with a roughly 50/50 split of successful and unsuccessful operations). This benchmark is probably the closest to a real-world workload.
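One possible reading of this mixed workload is sketched below: the operation is drawn per element, and the 50/50 success rate falls out of only half the keys being present at the start. How the real benchmark draws the mix is not specified here, so this is an assumption.

```c++
#include <algorithm>
#include <cstdint>
#include <limits>
#include <random>
#include <unordered_map>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;  // illustrative size
    std::mt19937_64 rng(42);
    std::uniform_int_distribution<std::int64_t> key_dist(
        std::numeric_limits<std::int64_t>::min(),
        std::numeric_limits<std::int64_t>::max());

    // Setup: n random keys, only the first half inserted.
    std::vector<std::int64_t> keys(n);
    for (auto& k : keys) k = key_dist(rng);
    std::unordered_map<std::int64_t, std::int64_t> map;
    for (std::size_t i = 0; i < n / 2; ++i) map.emplace(keys[i], 1);
    std::shuffle(keys.begin(), keys.end(), rng);

    // Timed phase: 50% reads, 25% inserts, 25% deletes over the full
    // vector; with half the keys initially present, each operation
    // succeeds roughly half the time.
    std::uniform_int_distribution<int> op(0, 3);
    std::size_t hits = 0;
    for (auto k : keys) {
        switch (op(rng)) {
            case 0: case 1: hits += map.count(k); break;  // read
            case 2:         map.emplace(k, 1);    break;  // insert
            default:        map.erase(k);         break;  // delete
        }
    }
    return hits == 0;  // keep `hits` observable
}
```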
Same as the UUID random inserts test, but the reserve method of the hash map is called beforehand to avoid any rehash during the insertion. This provides a fairer comparison, even though the growth factor of each hash map is different.
Before the test, n elements are inserted in the same way as in the UUID random inserts test. Each key is then deleted one by one, in a random order different from the insertion order.
Before the test, n elements are inserted in the same way as in the UUID random inserts test. Then the hash map's iterators are used to read all the key-value pairs.
Before the test, n elements are inserted in the same way as in the UUID random inserts test, and half of them are then deleted at random. All the original keys are then looked up in a different order, which leads to 50% hits and 50% misses.
Before the test, a vector with n random UUIDs is generated. Then for each value k in the vector, the key-value pair (k, 1) is inserted into the hash map.
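A sketch of the string setup; random_uuid is a hypothetical helper that merely formats two random 64-bit words in the canonical 8-4-4-4-12 layout, which may differ from the generator the benchmark actually uses.

```c++
#include <cstdint>
#include <cstdio>
#include <random>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical helper: formats two random words as a UUID-like string.
std::string random_uuid(std::mt19937_64& rng) {
    const std::uint64_t hi = rng(), lo = rng();
    char buf[37];
    std::snprintf(buf, sizeof(buf), "%08x-%04x-%04x-%04x-%012llx",
                  static_cast<std::uint32_t>(hi >> 32),
                  static_cast<unsigned>((hi >> 16) & 0xffff),
                  static_cast<unsigned>(hi & 0xffff),
                  static_cast<unsigned>(lo >> 48),
                  static_cast<unsigned long long>(lo & 0xffffffffffffULL));
    return buf;
}

int main() {
    const std::size_t n = 1'000'000;  // illustrative size
    std::mt19937_64 rng(42);

    // Setup: n random UUID strings.
    std::vector<std::string> keys(n);
    for (auto& k : keys) k = random_uuid(rng);

    // Timed phase: insert (k, 1) for every UUID k.
    std::unordered_map<std::string, std::int64_t> map;
    for (const auto& k : keys) map.emplace(k, 1);
}
```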
Before the test, n elements are inserted in the same way as in the UUID random inserts test. Each key-value pair is then looked up in a random order different from the insertion order.
Before the test, n elements are inserted in the same way as in the UUID random inserts test. Then another vector of n random UUIDs, all distinct from the inserted ones, is generated, and each of these elements is searched for in the hash map.
Before the test, a vector with n random UUIDs is generated, but only n/2 of them are inserted. Then the full vector is shuffled and processed with a mix of 50% reads, 25% inserts, and 25% deletes (each with a roughly 50/50 split of successful and unsuccessful operations). This benchmark is probably the closest to a real-world workload.