When does g_hash_table_lookup() start to be worthwhile compared to a loop of strcmp()?

Hash tables are definitely the way to go when data must be looked up in a large collection. But if the collection is very small (e.g. two strings), a simple if/else will be much faster.

Now I was wondering what I should do in my case. I have a collection of short strings (a dozen characters on average) that depends on the user. In theory there could be 100 of them, but most of the time there will be around 2–5 strings in total. By most of the time I mean 99.9% of the time.

So I was wondering if anyone has thought about the point at which using GHashTable starts to pay off compared to a loop of strcmp(). Does it start to be convenient after 5 strings? After 20? After 100?

It is probably just a futile curiosity. Yet I am curious about it (also for future designs).
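For concreteness, the two candidates look roughly like this. This is a minimal plain-C sketch (linear probing, no deletion, no GLib dependency — all function names here are made up for illustration); with GLib the second approach would instead be g_hash_table_new(g_str_hash, g_str_equal) plus g_hash_table_insert() and g_hash_table_lookup():

```c
#include <stddef.h>
#include <string.h>

/* Approach 1: linear scan, one strcmp() per stored key until a match. */
static const char *linear_lookup(const char *const keys[], const char *const vals[],
                                 size_t n, const char *needle)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(keys[i], needle) == 0)
            return vals[i];
    return NULL;
}

/* Approach 2: hash once, then jump (almost) straight to the entry.
 * djb2-style string hash, similar in spirit to GLib's g_str_hash(). */
static unsigned str_hash(const char *s)
{
    unsigned h = 5381;
    for (; *s; s++)
        h = h * 33 + (unsigned char)*s;
    return h;
}

#define TABLE_SIZE 64   /* power of two, comfortably above the key count */
struct entry { const char *key; const char *val; };

static void table_insert(struct entry t[], const char *key, const char *val)
{
    unsigned i = str_hash(key) & (TABLE_SIZE - 1);
    while (t[i].key != NULL)                 /* linear probing on collision */
        i = (i + 1) & (TABLE_SIZE - 1);
    t[i].key = key;
    t[i].val = val;
}

static const char *table_lookup(const struct entry t[], const char *key)
{
    unsigned i = str_hash(key) & (TABLE_SIZE - 1);
    while (t[i].key != NULL) {
        if (strcmp(t[i].key, key) == 0)
            return t[i].val;
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return NULL;
}
```

Note the trade-off the question is really about: the hash side always pays to hash the needle (walking the whole string) before any comparison, while strcmp() in the loop can bail out on the first differing byte. That is why a handful of short keys tends to favor the plain loop.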

General answer for performance-related matters:

Something like 20% of the code is responsible for 80% of the execution time.

So it’s a good practice to write code such that it’s easy to understand, and optimize the 20% only if needed (by doing measurements).

100% agree. I still have the academic curiosity, though, about when GHashTable starts to be faster than a plain if/else scanning through a list. My bet goes to “around ten strings”. I guess I will do a test at some point…
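One way to make such a test deterministic is to count full key comparisons instead of wall-clock time. A sketch under the same caveats (plain C, a small linear-probing table standing in for GHashTable — this is not GLib's actual implementation, and every name below is invented for illustration):

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static unsigned long cmp_count;  /* full key comparisons performed */

static int counted_strcmp(const char *a, const char *b)
{
    cmp_count++;
    return strcmp(a, b);
}

/* djb2-style hash, similar in spirit to g_str_hash(). */
static unsigned str_hash(const char *s)
{
    unsigned h = 5381;
    for (; *s; s++)
        h = h * 33 + (unsigned char)*s;
    return h;
}

#define TABLE_SIZE 256
struct entry { const char *key; };

static void table_insert(struct entry t[], const char *key)
{
    unsigned i = str_hash(key) & (TABLE_SIZE - 1);
    while (t[i].key != NULL)
        i = (i + 1) & (TABLE_SIZE - 1);
    t[i].key = key;
}

static int table_contains(const struct entry t[], const char *key)
{
    unsigned i = str_hash(key) & (TABLE_SIZE - 1);
    while (t[i].key != NULL) {
        if (counted_strcmp(t[i].key, key) == 0)
            return 1;
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return 0;
}

static int linear_contains(char keys[][16], size_t n, const char *key)
{
    for (size_t i = 0; i < n; i++)
        if (counted_strcmp(keys[i], key) == 0)
            return 1;
    return 0;
}

/* Insert n keys, look each one up once, return comparisons performed.
 * The linear scan does ~n/2 comparisons per lookup; the table does ~1. */
static unsigned long measure(size_t n, int use_table)
{
    static char keys[100][16];
    struct entry t[TABLE_SIZE] = {0};
    for (size_t i = 0; i < n; i++) {
        snprintf(keys[i], sizeof keys[i], "key-%zu", i);
        table_insert(t, keys[i]);
    }
    cmp_count = 0;
    for (size_t i = 0; i < n; i++) {
        if (use_table)
            (void)table_contains(t, keys[i]);
        else
            (void)linear_contains(keys, n, keys[i]);
    }
    return cmp_count;
}
```

Counting comparisons only captures part of the cost (it ignores hashing the needle, cache effects, and GHashTable's own overhead), so it bounds the crossover rather than pinpointing it — but it shows the linear scan's quadratic growth clearly.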

Yeah, it’s like sorting algorithms: some asymptotically smarter ways to sort are slower for small lists, which is why many real implementations fall back to insertion sort below a small threshold.

This topic was automatically closed 45 days after the last reply. New replies are no longer allowed.