This suggester isn’t very good: it takes a single query and suggests indexes for it. A good one would take a mix of queries and suggest a set of indexes, also considering the impact of additional indexes on write speed (table updates often need to update indexes, too).
For the example in this article, if the table is large and the average number of rows with a given ‘a’ value is close to 1, or if most queries are for ‘a’ values that aren’t in the database, it may even be better to do
CREATE INDEX x1a ON x1(a);
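A quick way to check which index the planner would actually pick is `EXPLAIN QUERY PLAN`. A minimal sketch using Python's built-in `sqlite3` module, assuming a query shaped like `SELECT b FROM x1 WHERE a = ?` (the table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE x1(a INTEGER, b TEXT)")
con.execute("CREATE INDEX x1a ON x1(a)")

# EXPLAIN QUERY PLAN reports which index (if any) the planner chooses.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM x1 WHERE a = ?", (42,)
).fetchall()
for row in plan:
    print(row[-1])  # e.g. "SEARCH x1 USING INDEX x1a (a=?)"
```

If the single-column index already makes the WHERE clause cheap, the extra table lookup to fetch `b` may not be worth maintaining a wider composite index.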
Since we wrote our initial index suggestion tool for Postgres, we actually went back to the drawing board, examined the concerns brought up, and developed a new per-table Index Advisor for Postgres that we recently released.
The gist of it: Instead of looking at the "perfect" index for each query, it's important to test out different "good enough" indexes that cover multiple queries. Additionally, as you note, the write overhead of indexes needs to be considered (both in terms of table writes per second and disk space used at a given moment in time).
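One way to make that trade-off concrete is a toy cost model: score each candidate index set by the read benefit it provides across the whole query mix, minus the write/maintenance cost of the indexes in the set, then pick the best-scoring set. Everything below (the index names, the benefit and cost numbers, the scoring function) is hypothetical, purely to illustrate why a shared "good enough" index can beat per-query "perfect" ones:

```python
from itertools import combinations

# Hypothetical read benefit of each candidate index for each query
# (in a real advisor these would come from planner cost estimates).
BENEFIT = {
    "idx_a":  {"q1": 8,  "q2": 7, "q3": 0},  # "good enough" for q1 and q2
    "idx_ab": {"q1": 10, "q2": 0, "q3": 0},  # "perfect" for q1 only
    "idx_c":  {"q1": 0,  "q2": 0, "q3": 9},
}
WRITE_COST = {"idx_a": 3, "idx_ab": 5, "idx_c": 3}  # per-index maintenance cost

def net_score(index_set):
    # Each query uses its best available index; every index taxes writes.
    read = sum(max((BENEFIT[i][q] for i in index_set), default=0)
               for q in ("q1", "q2", "q3"))
    return read - sum(WRITE_COST[i] for i in index_set)

best = max((s for r in range(len(BENEFIT) + 1)
            for s in combinations(BENEFIT, r)), key=net_score)
print(sorted(best))  # → ['idx_a', 'idx_c']
```

With these made-up numbers the model shares `idx_a` between q1 and q2 even though `idx_ab` would be "perfect" for q1, because the composite index's extra write cost is never paid back.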
I think this is a fascinating field and there is lots more work to be done. I've also found the 2020 paper "Experimental Evaluation of Index Selection Algorithms" pretty useful; it compares a few different approaches.
The underlying API can analyse multiple queries; it looks like only the `.expert` shell command has been wired up for a single one.
From the linked documentation: "The sqlite3expert object is configured with one or more SQL statements by making one or more calls to sqlite3_expert_sql(). Each call may specify a single SQL statement, or multiple statements separated by semi-colons." Then "sqlite3_expert_analyze() is called to run the analysis."
This is my pet peeve with SQL Server: SSMS will give you a missing-index suggestion and cost... the problem is that inexperienced people will take the suggestion as-is and create way too many highly specialized indexes over time.
.. via https://github.com/globalcitizen/taoup
Integer "age" has many repeats, but random floats are unique. That, or random ints might be from a large pool, again not many repeats.
I've sometimes wondered why server-based RDBMSs don't offer something like this. Is it too hard to implement? Or did people just not think of it? Or do they have something like this and I just never learned about it?
I don't know if it's still around, but in mid-2000s it was light years ahead of any other database.
It is able to do so without harming production performance.