
Hi,

We are looking at ways to archive our existing OTRS data to ease the load on our production server. We use the ArticleStorageFS method of storing emails, and we back up the data at 3-hour intervals; this seems to slow OTRS down because our backup system (Bacula, a great open source backup solution btw :)) needs to scan the whole article directory.

How can we move the MySQL data as well as the article data without losing any references between the ticket data and the article data? How does OTRS store the ticket data in the DB?

As you can see this is quite a unique situation, and any suggestions will be greatly appreciated.

Daniel

Danie wrote:
We are looking at ways to archive our existing OTRS data to ease the load on our production server. We use the ArticleStorageFS method of storing emails, and we back up the data at 3-hour intervals; this seems to slow OTRS down because our backup system (Bacula, a great open source backup solution btw :)) needs to scan the whole article directory.
How can we move the MySQL data as well as the article data without losing any references between the ticket data and the article data? How does OTRS store the ticket data in the DB?
Maybe you can replicate the database and rsync-update the article storage directory on the archive system?

regards, Torsten
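A minimal sketch of what that could look like, assuming MySQL, SSH access to the archive host, and the default /opt/otrs/var/article location for ArticleStorageFS (the host name, database name and credentials below are placeholders, not anything from this thread):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $ArchiveHost = 'otrs-archive';            # placeholder archive server
    my $ArticleDir  = '/opt/otrs/var/article/';  # default ArticleStorageFS directory

    # 1. Dump the production database and load it into the archive server's MySQL.
    system('mysqldump -u otrs -p otrs > /tmp/otrs.sql') == 0
        or die "mysqldump failed: $?\n";
    system("mysql -h $ArchiveHost -u otrs -p otrs < /tmp/otrs.sql") == 0
        or die "restore failed: $?\n";

    # 2. Copy the article files across. rsync only transfers new or changed
    #    files, so repeated runs stay much cheaper than a full copy.
    system("rsync -az $ArticleDir ${ArchiveHost}:$ArticleDir") == 0
        or die "rsync failed: $?\n";

If the dump and the rsync run back to back, the ticket-to-article references (which live in the database, with ArticleStorageFS holding only the message bodies and attachments on disk) should stay consistent on the archive copy.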

Hi Torsten,

I think I may have explained myself incorrectly. We actually want to move the data off the production system; in other words, remove the actual entries in the database as well as the articles on the local filesystem, and insert them into the archive server's database.

Torsten Thau wrote:
Maybe you can replicate the database and rsync-update the article storage directory on the archive system?

Danie wrote:
Hi Torsten,
I think I may have explained myself incorrectly. We actually want to move the data off the production system; in other words, remove the actual entries in the database as well as the articles on the local filesystem, and insert them into the archive server's database.
...ah - sorry, that's not what I understood.

On 3/16/07, Danie wrote:
I think I may have explained myself incorrectly. We actually want to move the data off the production system; in other words, remove the actual entries in the database as well as the articles on the local filesystem, and insert them into the archive server's database.
Making a copy of the database and the OTRS install to another machine is still valid. Then, once your archive copy is working properly, set up a GenericAgent task that deletes all tickets older than whatever date you specify from your active setup. I have one that I use if you need an example; it deletes messages in the Spam queue that are over 2 months old and have a status of Closed Unsuccessful.

If you want to do this on a very regular basis it doesn't work so well, and you'll need a more integrated process to "move" tickets between installs, but if you just want to clean things out once a year or so it could work fine. Also, obviously you can't search across the two installs, which could be an issue. Personally, I'd look at optimizing things so the whole database can run in one place; just deleting spam drastically reduces the size of our database and filesystem article storage.

Note that the delete task may have to run several times before all the old tickets/articles are purged. I think there's a timeout that limits it to only running for a certain length of time (but that is speculation).

Thanks,
Bryan
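For what it's worth, a job along those lines would go into the %Jobs hash in Kernel/Config/GenericAgent.pm roughly as sketched below. The job name is invented and the exact search keys can differ between OTRS versions, so treat this as a starting point rather than a drop-in:

    # Entry to add inside %Jobs in Kernel/Config/GenericAgent.pm.
    # Job name is made up; verify the keys against your OTRS version.
    'purge old spam' => {
        # select: closed-unsuccessful tickets in the Spam queue
        # created more than roughly two months ago
        Queue  => 'Spam',
        States => ['closed unsuccessful'],
        TicketCreateTimeOlderMinutes => 60 * 24 * 60,    # ~2 months in minutes
        # action: delete the matching tickets and their articles
        New => {
            Delete => 1,
        },
    },

Jobs defined this way are picked up by the GenericAgent cron run (bin/GenericAgent.pl on 2.x installs), which fits Bryan's observation that it may take several runs before everything older than the cut-off is actually purged.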
participants (3)
- Bryan Fullerton
- Danie
- Torsten Thau