Have you experienced performance bottlenecks above some thresholds?

Hi,

I'm evaluating OTRS's performance for a fairly large-scale implementation and I'm wondering if anyone ran into sharp performance bottlenecks after crossing some threshold for a metric such as total number of tickets, database size, attachment size, etc. Please share if you did.

I ask because I need to make a commitment to a potential customer about how long their implementation's performance will scale, expressed in metrics that can be translated into time intervals (something like: if you have 500 new tickets per month and about 30% open tickets, then this implementation will very likely scale for ~3 years).

Looking around the docs and the Internet, I keep hitting a recommendation to switch the "backend module for the ticket index" from RuntimeDB to StaticDB once the database has ~60,000 tickets or 6,000 open tickets (http://doc.otrs.org/3.1/en/html/performance-tuning.html). However, there is no technical explanation of how these numbers came about. Do they depend on hardware, or are they thresholds beyond which performance stops scaling no matter how much hardware you throw at it?

Another possible issue is database size. Here I found no clear-cut numbers, but various mailing list posts led me to believe performance would scale, with adequate hardware, for databases up to 200 GB.

My case will involve a relatively light workload at first (~100 tickets/day), but these will be tickets with large photo attachments, so I may have to offload attachments from the database sooner than I expect.

Thanks,
Bogdan

While I personally have no such implementation under my belt, if you feel
you're going to reach that limit in a hurry, you won't hurt yourself by
deploying with StaticDB and Article FileSystem Attachment storage from the
outset. The StaticDB caveat is simply that the index may "lag" behind real
time. (This is *generally* only important for searching tickets.)
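For reference, here is a minimal sketch of the index switch as a
Kernel/Config.pm override, using the setting names from the 3.1
performance-tuning page you linked (verify the exact module name and
rebuild script against your installed version):

    # Kernel/Config.pm - switch the ticket index accelerator from the
    # default RuntimeDB backend to StaticDB (Ticket::IndexModule setting)
    $Self->{'Ticket::IndexModule'} = 'Kernel::System::Ticket::IndexAccelerator::StaticDB';

If you change this on a system that already has tickets, the static index
has to be rebuilt once afterwards; 3.x ships a rebuild script for that
(bin/otrs.RebuildTicketIndex.pl in recent versions, but check your bin/
directory).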
Article FileSystem Attachment storage is a better choice if you're expecting
large or many attachments and don't want them in your database. This helps
keep the database at a sane size. Just make sure you have a good-sized
storage location and back up that file system as part of your backup routine.
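A similarly hedged sketch for the attachment side (the Ticket::ArticleDataDir
path below is just the usual default under /opt/otrs; adjust it to your
environment):

    # Kernel/Config.pm - store article attachments on the filesystem
    # instead of inside the database (Ticket::StorageModule setting)
    $Self->{'Ticket::StorageModule'}  = 'Kernel::System::Ticket::ArticleStorageFS';
    # Directory that will hold the attachment files; it must be writable
    # by the web server user and included in your backups.
    $Self->{'Ticket::ArticleDataDir'} = '/opt/otrs/var/article';

Note that attachments already stored in the database stay there unless you
migrate them; newer 3.x releases include a bin/otrs.ArticleStorageSwitch.pl
script for that, but check whether your version ships it.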
participants (2)
- Bogdan Iosif
- Gerald Young