During our scheduled maintenance period this morning we had a catastrophic hardware failure across a number of systems, including some of our backup systems. We have the capacity and ability to recover from this failure. However, due to the extent of the failure we cannot provide an ETA on when service will be restored. We will continue to provide updates as they become available to us.
We apologize; we want to play just as badly as you do.
We attempted to upgrade the processors on our database boxes. Both servers were taken offline simultaneously while the new hardware was being put in place. During that time, the file systems on BOTH boxes were destroyed. If this boggles your mind to the point that you can't believe it could happen, you are in the exact same boat we are! It seems almost statistically impossible; however, it is in fact what happened.
This leaves us with no hardware to run our DBs on until we can get new servers online. The reason the situation is so catastrophic is that this hardware is not the type that is simply sitting around, even in the largest hosting facilities in the world (where we host). We are scrambling to bring our systems back online temporarily so you can play the game through other short-term methods while we get new hardware in place.
Maliken stated the situation plainly but accurately.
A CPU upgrade was performed at the data center on our database servers. A chassis swap was performed and resulted in some manner of data loss on the system; I don't know exactly how this happened. We are trying to recover the lost data from the box that failed the disk check utility. We are also actively restoring a backup, and as any IT professional knows, restoring hundreds of gigabytes of data takes time. We are currently racing between two system restores, one on site and one at a different location.
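For a rough sense of why that takes so long, here is a hypothetical back-of-envelope calculation; the data size and throughput figures below are illustrative assumptions, not our actual numbers.

```python
# Hypothetical back-of-envelope: why restoring "hundreds of Gigs" takes
# hours, not minutes. Sizes and throughput here are assumptions for
# illustration only.

def restore_hours(data_gb: float, throughput_mb_s: float) -> float:
    """Lower-bound restore time: raw bytes divided by sustained throughput."""
    seconds = (data_gb * 1024) / throughput_mb_s
    return seconds / 3600

# e.g. 500 GB at a sustained 60 MB/s (disk + network overhead combined)
print(f"{restore_hours(500, 60):.1f} hours")  # ~2.4 hours, best case;
# real restores run slower once verification and index rebuilds kick in
```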
At the same time, nothing has really changed. We are not sleeping. Everyone is still here working: testing everything, re-testing everything, moving data, setting up new boxes, testing more, and so on.
As Maliken said, we had a simple hardware update in our remote datacenter go completely bonkers and destroy our databases. As soon as we found out, we had to pick our chins up off the floor, then use them to say "ARE YOU KIDDING ME?!"
Ever since, we have been doing all we can to figure out where we stand with backups and new hardware, syncing domains, and making sure all of our systems point to the right places and work correctly (a rough sketch of that kind of check follows below). That is what we are doing now.
TL;DR - Stuff broke. We are still here fixing it. Nothing has changed on your end that you need to know about yet. We will be here all night.
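For the curious, here is a hypothetical sketch of the kind of sanity check that "making sure all of our systems point to the right places" involves; the hostnames and IPs are made up for illustration and are not our real infrastructure.

```python
# Hypothetical check: resolve each service hostname and compare it to the
# IP it should point at after the move. Hostnames/IPs are placeholders.
import socket

EXPECTED = {
    "db.example.com": "203.0.113.10",     # new database box (hypothetical)
    "login.example.com": "203.0.113.11",  # login service (hypothetical)
}

for host, expected_ip in EXPECTED.items():
    try:
        actual_ip = socket.gethostbyname(host)
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
        continue
    status = "OK" if actual_ip == expected_ip else f"STALE (got {actual_ip})"
    print(f"{host}: {status}")
```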
We are finishing up some tests and mass-restarting all of the servers to get some new IP configuration information. We will let you know how things look after all of them have reset and connected to the new database.
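As a rough illustration of that "wait for everything to reconnect" step, here is a minimal sketch that polls an endpoint until it accepts TCP connections; the host and port are placeholders, not our actual database address.

```python
# Hypothetical sketch: poll the new database endpoint until a TCP
# connection succeeds, then proceed with restarting dependent servers.
import socket
import time

def wait_for_endpoint(host: str, port: int, timeout_s: float = 300.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True       # port is accepting connections
        except OSError:
            time.sleep(10)        # not up yet; retry shortly
    return False

if wait_for_endpoint("db.example.com", 5432):  # placeholder host/port
    print("database reachable; restarting game servers")
else:
    print("still unreachable; keep waiting")
```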
We are turning everything back on now. The systems, logins, and such will be coming back up slowly.