I was having a really frustrating problem: long-running queries against a (MS) SQL Server database kept raising a TimeoutError. After much research, I determined that the issue was the ADO driver's CommandTimeout property, which needed to be set much higher. From circumstantial evidence, queries seemed to time out after 30 seconds.
Setting that property via ActiveRecord was tricky. The property can be set directly on the underlying DBI database handle object, which can be reached through the ActiveRecord::Base.connection object.
The name of my ActiveRecord model is IDX. So, I set the timeout with:
IDX.connection.instance_variable_get(:@connection).handle.instance_variable_get(:@handle).setproperty('CommandTimeout', 7200)
That works, but are you kidding me?!
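If nothing else, I'd wrap that incantation in a helper rather than repeat it. Here is a minimal sketch, assuming the same ActiveRecord/DBI/ADO internals as above; the with_long_timeout name, the block form, and the restore-to-30-seconds step are my own choices, not anything the ADO driver provides:

# Hypothetical helper: raises the ADO CommandTimeout around a block of
# long-running queries, then restores ADO's 30-second default.
def with_long_timeout(model, seconds = 7200)
  # Dig out the raw ADO object: connection -> DBI handle -> ADO handle.
  dbh = model.connection.instance_variable_get(:@connection).handle
  ado = dbh.instance_variable_get(:@handle)
  ado.setproperty('CommandTimeout', seconds)
  yield
ensure
  # Put the default back even if the block raises.
  ado.setproperty('CommandTimeout', 30) if ado
end

# Usage: any query inside the block gets the longer timeout.
with_long_timeout(IDX) { IDX.count }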
Credit goes to the author of the blog at http://www.enterpriseetc.com/post/Ruby-in-Practice.aspx. His update at the very end ultimately pointed me in the right direction.
22 February 2008
19 February 2008
Moving to PostgreSQL
We had a mini-throwdown on database servers yesterday. After some research, we determined that we could not use MySQL in one of our commercial products without purchasing MySQL's commercial license, which costs $600 per year per installation. That kind of money adds up across 40 installations.
Given that, we have decided to try using PostgreSQL instead. It is a robust database with plenty of good software APIs written around it. We need to be able to connect to it via PHP, Python, Ruby, and ODBC, and as far as I can tell, all of those will work just fine. PostgreSQL, besides being robust, is absolutely free for us to use in our commercial application, with no requirement to open-source the code. We have decided to try using it in our custom applications as well.
I spent some time in Postgres yesterday and got up to speed quickly. It has a wealth of enterprise-worthy features, though I'm sure we won't be using most of them. If I'm wrong, then, hey, at least they are already in the server and we can scale up as needed. One explicit technology decision my company has made is to use the back-end database simply as a data store and not to use its advanced features, such as views, triggers, and stored procedures, unless it becomes necessary on a case-by-case basis. We are pushing all of that business logic into the application layer so that it lives in one place and is easily maintainable by multiple developers. The goal is for our code to be database agnostic. The trade-off is a slightly slower application.
Now my issue is getting an ORM for Python to work with both MySQL and PostgreSQL. More to come...