Yeah, as an IT-adjacent person I'd be willing to bet that one of the biggest issues is that they are probably not large enough to have a "real" IT department with real process control. Then add on the fact that they have multiple properties with multiple services running, and there are good odds that nobody really understands everything and all of the interconnections. In these types of environments it is very easy for someone to touch something and break something else on the other side of the house without even knowing it. Whenever you see something break and then stay broken, this is usually the culprit: the inability to immediately test and roll back.
And to those who are complaining about the "haters": if they hadn't jacked their prices by more than 50% in one shot, they'd have a lot more fans. Most of us loved them right up until that happened. They created their own "haters" and now it is up to them to deal with that. Everyone else in the world has figured out how to manage cost increases and spread them over time. This was a conscious business decision, and this was the fallout they should have anticipated (and probably did).
It is a vicious cycle: a small group of people knows how X works (and why) at the beginning; X sprawls over time and people change; lots of different services come to depend on X; because it just works, it is hard to test changes to X or make a platform adjustment; then next weekend we are going to do an update to do Y, and wackiness ensues.
I feel for the IT people who had to wade into whatever issue they were having. It is really hard to be proactive with IT stuff when people pull IT groups in a million directions... until something goes kablooie, and then it gets high visibility and attention, but the proverbial ship has already sailed at that point.
At some point it is like being held hostage by your own systems.