OTRS behind Haproxy and GlusterFS

Hi all, I think I have a rather complicated but intriguing question for the mailing list. I succeeded in assembling and getting up and running a six-node cluster, with Apache running clustered and proxied by HAProxy. The most relevant feature of this setup is that the web root sits on top of a GlusterFS volume shared between the six nodes, so each Apache server serves pages from the same volume and every file upload is immediately available to the other nodes. This approach is working quite well with a couple of web applications, and I was wondering about the possibility of migrating our OTRS from a single standalone server (with a PostgreSQL database on a dedicated machine) to the cluster.

What worries me is not the session problem, because the OTRS installation would remain a single one (living on the clustered filesystem), but the scheduler and all the scheduled maintenance operations that OTRS performs during the day, the PostMaster activities, and so on.

Has anyone had any experience with OTRS on top of a cluster with GFS2? Is my approach at least theoretically OK, or am I doing something really wrong in your opinion?

Thanks a lot

-- Stefano Finetti, Lynx International Srl
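[For readers reconstructing the setup: a minimal haproxy.cfg sketch of the load-balancing layer described above. Node names, addresses, and the health-check path are illustrative assumptions, not taken from the thread.]

    # frontend accepting client traffic, handing it to the Apache pool
    frontend www
        bind *:80
        default_backend apache_nodes

    # six Apache nodes, all serving the same GlusterFS-backed web root
    backend apache_nodes
        balance roundrobin
        option httpchk GET /
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check
        # ... one "server" line per node, up to web6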

Out of the box, OTRS still has some rough edges for clustered configurations. It still stores some configuration info for individual nodes in node-local storage, which can be tricky to manage while keeping the cluster configuration consistent. We did some mods to move all configuration information into the database, but that was for an old version of OTRS 3. I'm trying to bring those mods up to date in my copious free time (not!), but it's going slowly.
Has anyone had any experience with OTRS on top of a cluster with GFS2?
Use Gluster or Ceph if you try this. Both are better behaved than raw GFS2. If you have the OTRS configuration managed by something like Puppet and do some unnatural things with unionfs, you can make it work, but it's a bit fragile for production use.
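[A rough sketch of the unionfs-style trick David mentions, here using overlayfs, the in-kernel successor to unionfs: a small node-local writable layer on top of the shared tree. All paths and volume names are assumptions for illustration.]

    # mount the shared GlusterFS volume (same content on every node)
    mount -t glusterfs gluster1:/otrs-vol /mnt/otrs-shared

    # per-node writable layer plus the work dir overlayfs requires
    mkdir -p /srv/otrs-local/upper /srv/otrs-local/work

    # present the merged view at the path where OTRS expects to live
    mount -t overlay overlay \
        -o lowerdir=/mnt/otrs-shared,upperdir=/srv/otrs-local/upper,workdir=/srv/otrs-local/work \
        /opt/otrs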
Is my approach at least theoretically ok or am I doing something really wrong in your opinion?
See above. Right now, you need patches or some very creative configuration management to make this setup work.
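[One hedged way to contain the scheduler and cron worry from the original post, assuming a single shared OTRS tree: keep the periodic jobs active on one designated node only. OTRS 3.x ships bin/Cron.sh and bin/otrs.Scheduler.pl for this; the /opt/otrs path is an assumption.]

    # on the single designated "batch" node
    su - otrs -c '/opt/otrs/bin/Cron.sh start'
    su - otrs -c '/opt/otrs/bin/otrs.Scheduler.pl -a start'

    # on every other node, keep the periodic jobs disabled
    su - otrs -c '/opt/otrs/bin/Cron.sh stop'
    su - otrs -c '/opt/otrs/bin/otrs.Scheduler.pl -a stop'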

2015-05-23 9:02 GMT+02:00 David Boyes
Out of the box, OTRS still has some rough edges for clustered configurations. It still stores some configuration info for individual nodes in node-local storage, which can be tricky to manage while keeping the cluster configuration consistent. We did some mods to move all configuration information into the database, but that was for an old version of OTRS 3. I'm trying to bring those mods up to date in my copious free time (not!), but it's going slowly.
Hi and thanks for the answer. What do you mean by "some configuration info for individual nodes"? I was thinking that having a single OTRS installation on a clustered filesystem would avoid having any files stored anywhere other than on the cluster... Does OTRS save something, let's say, in /tmp or /var that can't be configured to live on the mounted cluster?
Use Gluster or Ceph if you try this. Both are better behaved than raw GFS2. If you have the OTRS configuration managed by something like Puppet and do some unnatural things with unionfs, you can make it work, but it's a bit fragile for production use.
Oops, sorry, I wrote GFS2 but I meant GlusterFS. This is a GlusterFS cluster. My mistake.

-- Stefano
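[On Stefano's question above about /tmp and /var: for reference, in a stock OTRS tree most runtime state lives under the installation's own var/ directory, so on a shared /opt/otrs it would follow the cluster; whether it is safe for concurrent nodes is a separate question. The /opt/otrs home is an assumption.]

    /opt/otrs/var/article    # attachments, if the filesystem backend is used
    /opt/otrs/var/sessions   # session files, if the FS session module is used
    /opt/otrs/var/tmp        # cache and temporary files
    /opt/otrs/var/log        # logs, if file logging is configured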

2015-05-23 9:02 GMT+02:00 Finetti, Stefano
Hi and thanks for the answer. What do you mean by "some configuration info for individual nodes"? I was thinking that having a single OTRS installation on a clustered filesystem would avoid having any files stored anywhere other than on the cluster...
I probably should clarify. What we were trying to do was to have multiple copies (one on each node). We kept finding places where the location and version of the SysConfig blobs were getting overwritten, and unless you did some creative things with overlaying filesystems on node n>1, the additional instances would come up as node 1 and Bad Things happened.

I went hunting through the source, moved every reference to a local file into a database object in a node-specific table, and added a --node option to identify which node that instance represented. That was in OTRS 3.

Recent versions have been better. I should try it again.

I suspect that if you only tried to run on one node at a time it would probably be OK, but that kinda defeats the point of the cluster. You probably could hunt down all the directory references in the source; I don't know if they're all exposed to SysConfig.
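[To make the "SysConfig blobs" concrete: in OTRS 3.x the SysConfig front end writes its state into generated Perl files inside the installation tree, so on a shared filesystem every node reads and rewrites the same files, which matches the overwriting David describes. Paths assume a standard /opt/otrs home.]

    /opt/otrs/Kernel/Config.pm                  # hand-edited local settings
    /opt/otrs/Kernel/Config/Files/ZZZAuto.pm    # generated by SysConfig
    /opt/otrs/Kernel/Config/Files/ZZZAAuto.pm   # generated defaults dump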

Assuming that the whole OTRS directory would be in the clustered filesystem, wouldn't the directory references be the same for each node of the cluster? They would all share the /var/www (let's say) directory, where OTRS would be placed. I'm not sure I understand which references you're talking about. I don't mind, by the way, writing some code or editing some configuration; I have quite often written my own Perl modules for OTRS to customize our installation here, using the /Custom directory structure (see the layout sketch after this message).
2015-05-23 9:40 GMT+02:00 David Boyes
2015-05-23 9:02 GMT+02:00 Finetti, Stefano
Hi and thanks for the answer. What do you mean by "some configuration info for individual nodes"? I was thinking that having a single OTRS installation on a clustered filesystem would avoid having any files stored anywhere other than on the cluster...
I probably should clarify. What we were trying to do was to have multiple copies (one on each node). We kept finding places where the location and version of the SysConfig blobs were getting overwritten, and unless you did some creative things with overlaying filesystems on node n>1, the additional instances would come up as node 1 and Bad Things happened.
I went hunting through the source, moved every reference to a local file into a database object in a node-specific table, and added a --node option to identify which node that instance represented. That was in OTRS 3.
Recent versions have been better. I should try it again.
I suspect that if you only tried to run on one node at a time it would probably be OK, but that kinda defeats the point of the cluster. You probably could hunt down all the directory references in the source; I don't know if they're all exposed to SysConfig.
-- Stefano
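[For readers unfamiliar with the /Custom mechanism Stefano mentions: OTRS puts the Custom directory ahead of the stock tree in its module search path, so a copied module placed under Custom/ shadows the original of the same relative path. The module name below is just an example.]

    /opt/otrs/Kernel/System/Ticket.pm            # stock module
    /opt/otrs/Custom/Kernel/System/Ticket.pm     # customized copy, loaded first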
participants (2)
- David Boyes
- Finetti, Stefano