
* Henry Combrinck
It does appear to be using the index "article_pkey" (sa.id) unless I'm reading the above incorrectly...
Well, no, it's doing sequential scans ("Seq Scan") on both article and ticket, and you really want to avoid that. Here's what it would look like (modulo other local configuration issues that might influence the plan layout) if the indexes were used:

 Sort  (cost=10.70..10.72 rows=7 width=26)
   Sort Key: sa.create_time
   ->  Nested Loop  (cost=4.31..10.60 rows=7 width=26)
         ->  Index Scan using ticket_pkey on ticket st  (cost=0.00..3.07 rows=1 width=8)
               Index Cond: (id = 3604)
         ->  Merge Join  (cost=4.31..7.46 rows=7 width=26)
               Merge Cond: ("outer".id = "inner".article_sender_type_id)
               ->  Index Scan using article_sender_type_pkey on article_sender_type sdt  (cost=0.00..3.03 rows=3 width=14)
               ->  Sort  (cost=4.31..4.33 rows=7 width=18)
                     Sort Key: sa.article_sender_type_id
                     ->  Index Scan using article_ticket_id on article sa  (cost=0.00..4.22 rows=7 width=18)
                           Index Cond: (ticket_id = 3604)

As you can see, even Postgres' theoretical "cost" is several orders of magnitude lower. (This is with 7k/28k tickets/articles however; it seems you've got a tad more tickets in your database.)
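If the article table is missing that index, a sketch of how to add it and verify the plan (the index name here is just taken from the plan above; skip the CREATE if it already exists):

```sql
-- Index on the join/filter column the plan above relies on
CREATE INDEX article_ticket_id ON article (ticket_id);

-- Re-run the query under EXPLAIN; "Index Scan using article_ticket_id"
-- should replace the "Seq Scan on article" line
EXPLAIN
SELECT ...;  -- the original ticket/article query
```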
I'll give the other link (bigint-int) a try and see, but I doubt this will have a major impact on the problem.
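For reference, the bigint-int issue is that older Postgres releases (before 8.0) would not use an index on a bigint column when it was compared against a plain integer literal; the usual workaround was to quote or cast the constant. A rough sketch, assuming ticket_id is bigint:

```sql
-- Old planner behaviour: plain integer literal defeats the bigint index
SELECT * FROM article WHERE ticket_id = 3604;

-- Workarounds: quote the literal, or cast it to the column's type
SELECT * FROM article WHERE ticket_id = '3604';
SELECT * FROM article WHERE ticket_id = 3604::bigint;
```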
I'll be surprised if you don't see a noticeable improvement, but there might of course be other issues as well. Remember to tell postgres to "analyze". (Why this isn't done automatically is beyond me.) Arne.
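Concretely, from psql that would be something like:

```sql
-- Refresh planner statistics for the whole database...
ANALYZE;

-- ...or just for the tables involved here
ANALYZE ticket;
ANALYZE article;

-- VACUUM ANALYZE also reclaims dead row space while updating statistics
VACUUM ANALYZE article;
```

Without up-to-date statistics the planner can badly misestimate row counts and fall back to sequential scans even when usable indexes exist.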