Submitted by BikerJedi t3_yzdy4r in tifu

Reposted from 8 years ago with moderator permission.

While in college, pursuing a degree in Information Systems, I got a job at the now-defunct Western Pacific Airlines. It was basically a paid internship doing all sorts of computer-related stuff. They were a small airline based in Colorado Springs, CO, and later in Denver. They attempted to take over Frontier Airlines and went bankrupt in 1998.

One day before I left work, my boss gives me several long Ethernet cables and tells me, "Go patch the new modems into the computer network." He liked to test me in ways like this from time to time. So I head down to the data room.

Now, I think it is important that you know that I had only been in a real data room twice before, and I had never worked in one. The boss knew this, and wanted to see if I could figure it out on my own or not. For those that don't know, data rooms have raised floors so you can run cables under the floor tiles, lots and lots of racks of computer equipment, tons of AC to keep it all cool, etc.

Anyway, I walk in, find the modem bank, find several modems that have no cables attached. I look at the ones that are wired in, follow the cables, figure out what switch they are plugged into, and wire up the new modems just like those. Power them up and see them connect to the switch on the local network. Then I replace the floor tiles I pulled up and head home for the day.

The next day I come in to work after class is out and my badge doesn't work. The guard tells me to wait. A minute later my boss and two security guards show up and escort me to the CEO's office. No one will talk to me and I'm freaking out. Inside the office, besides the CEO, are the CIO, CFO, my boss and the two bosses above him. They start questioning me.

What did I do yesterday at the end of the day? Did I get the modems working? Did I remove floor tiles? Did I notice anything out of the ordinary? Long story short, I had somehow kicked loose the power cable for the main Pyramid server that ran the entire airline. They had no redundancy built into the network for that server. So for 45 minutes, WestPac could do nothing. They couldn't sell tickets, make reservations, board planes, take off, etc. Nothing. I inconvenienced thousands of people. I was told I cost the airline somewhere around $200,000. I don't know if that is accurate or not. Eventually someone noticed that the server had no power and plugged it back in. The airline was back up and running a few minutes later.

I didn't lose my job over that. They all had a good laugh, and admonished me to be more careful in the future. I suggested that they find a way to lock the cable down, but they rejected that idea. Maybe that kind of thinking is what led them to bankruptcy. I wasn't there at the end.

TL;DR: Disconnected a server by accident, the entire company went offline for 45 minutes.

971

Comments

whatproblems t1_iwzghsf wrote

no backup and the entire company relies on a single power cord to work…. all you did was expose the single point of failure that should have been fixed

926

BikerJedi OP t1_iwzr5st wrote

There were apparently multiple points of failure there. I later became a network engineer, and when I think back on how they had things set up, it amazes me that they were able to function at all.

367

FireEmblemFan1 t1_ix1455m wrote

AUS has entered the chat

18

Kara_Zhan t1_ix1k1oo wrote

I choose to believe AUS means the entirety of Australia

38

FireEmblemFan1 t1_ix1ti3k wrote

It’s an airport in Austin, Texas. Power went out for like a day and there was no backup. It’s an international airport too. So yeah. Shit was not fun. For people who had to fly, anyway.

Anyway, the joke is Texas’ shitty power grid.

19

blbd t1_ix1zf77 wrote

At least it's not Berlin.

1

CN2498T t1_ix6xl5m wrote

Every company has redundancy, but they all have a single point of failure they have not realized yet and won't until it happens.

1

SweetCosmicPope t1_iwzlvz4 wrote

As an IT professional, and former datacenter technician, there’s so much wrong with this story. Several people above you needed to lose their jobs over this. This little eff up wasn’t by you; it came from several steps above you (and your boss).

171

Bitter_Mongoose t1_iwzy5pq wrote

For real lol... who sends an intern, unsupervised, into the core data rack of an enterprise to randomly plug in equipment, with no port assignments or anything like that?

This whole scenario stinks so bad it makes me wonder if you were a weapon used to eliminate the head of IT 😂

108

BikerJedi OP t1_ix0du1g wrote

>no port assignments or anything like that.

Didn't even tell me what rack the equipment was on or anything. I wandered around in there looking at switches and routers and shit until I found the modems. I didn't say shit because I wanted a paycheck and some experience, but yeah, stupid way to do things when you are dealing with critical systems. He should have at least shadowed me in there and watched, even if he wasn't going to say or do shit to help me.

60

Bitter_Mongoose t1_ix0egs3 wrote

Ngl, there was a point in time when I would have done exactly the same thing for the same reasons- blissful ignorance and eager to learn lol.

20

BikerJedi OP t1_iwzra5u wrote

Believe me, I know. My fuckup was really not paying attention to where my big ass feet were going.

31

derKestrel t1_iwzdy7l wrote

That is why server power supplies normally have these little wire clips to secure the power cable.

But in the past, no one had those.

140

somewhereinks t1_ix03zbi wrote

In the early 80's I worked for a large telco that used a magnetic drum storage device (yes, I'm old). The drum was used to store long distance billing for a very large geographic area. It was also powered through a standard wall switch right by the door. One night the cleaner went to turn off the lights and turned off the drum as well. No one noticed until the next morning. A lot of people got free long distance that night.

It's not the cleaner's fault; the blame remains on the idiot who decided that a wall switch without a mechanical guard was a good idea.

56

HilariousSpill t1_ix2c3ii wrote

That’s amazing.

A piece of duct tape is literally all it would have taken to prevent that. It’s not the right solution, but it would have been that easy.

2

M4NOOB t1_ix0lk16 wrote

They usually also have at least two power supplies, and therefore at least two power cables, for redundancy.

8

derKestrel t1_ix157yx wrote

Unless one broke and you never got funds for a replacement...

3

erudite_luddite t1_ix4a6k2 wrote

Haha, this is the answer! A casino in AZ decided to co-opt the back-up systems rather than fix the mains and when a "Flood Of A Century!!!"(1) flooded the basement, they were offline for months. They had fired their services provider the year prior and decided to manage in-house, to cut costs.

(1) It was a typical monsoon storm, happens > 100 times a century.

2

Pinkfatrat t1_ix2pyf9 wrote

I have to laugh at this, because I remember when N+1 power supplies came out. They weren’t always a thing.

1

katatondzsentri t1_ix0w9m6 wrote

Also two power supplies for a single server, ideally powered by two different power sources, for redundancy...
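Back-of-the-envelope math shows why that second feed matters so much. A minimal sketch (the uptime numbers are made up for illustration, and it assumes the two feeds fail independently):

```python
# Rough availability math for one power feed vs. two independent feeds.
feed_a_uptime = 0.999   # assume each feed is up 99.9% of the time
feed_b_uptime = 0.999

hours_per_year = 24 * 365

# One feed: the server is dark whenever that feed is down.
single_down_hours = (1 - feed_a_uptime) * hours_per_year

# Two independent feeds: dark only when BOTH are down at once.
dual_down_hours = (1 - feed_a_uptime) * (1 - feed_b_uptime) * hours_per_year

print(f"one feed:   ~{single_down_hours:.1f} hours/year without power")
print(f"dual feeds: ~{dual_down_hours * 60:.2f} minutes/year without power")
```

That’s roughly 8.8 hours a year of lost power shrinking to about half a minute, which is the whole argument for dual-corded servers.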

5

derKestrel t1_ix15c96 wrote

Ideally...

Depending on where you work, not really. You can't always get budget to replace a broken power supply, because "there is still one in it"...

4

Antezulu t1_ix0digt wrote

I recently got an update bulletin and had to remove every power cable clip in one of our server rooms. That's gonna be fun later.

0

gellenburg t1_ix00o1u wrote

As someone who has worked in IT for over 30 years, that was not your fuck up in the slightest.

Not even by a longshot.

100% on your boss.

He knew you were inexperienced, and he told you to do something you were wholly unqualified for and unprepared to do.

But the real fuckup is the CIO, CTO, CFO, and CEO for not having a disaster recovery or business continuity plan.

Good riddance.

78

BikerJedi OP t1_ix0demc wrote

Yeah - I later became a network engineer, and I used this as a learning experience. When I became a manager, I made sure I showed my people how to do shit and never "tested" them this way.

It did absolutely amaze me that a single server was that critical to the business. Even the smallest company I worked for had several backups.

31

yanbu t1_iwzypc4 wrote

My first job out of college was at Boeing. While I was there, a guy I worked with accidentally plugged a cable from a switch back into itself and took down the entire building’s network. Thousands of people, including the AOG engineering team, were suddenly not able to work. The networking team couldn’t figure out what had happened, so they eventually just told everyone who worked in the building to go home and work from there if they could. It’s funny how fragile we allow critical infrastructure to be sometimes.

23

BikerJedi OP t1_ix0d4ly wrote

A competent network engineer would have seen where the packets were getting lost and should have been able to figure the problem out. Crazy.

6

yanbu t1_ix0dobq wrote

Well, this was almost 20 years ago at this point; no idea what kind of tools they had back then. And they did get it figured out eventually. Funny stuff though.

5

BikerJedi OP t1_ix0edgi wrote

It would have been routine. Figure out where the packets are getting lost, then go physically check the device if a remote reboot doesn't work or can't be performed. Follow standard troubleshooting. Check the cables, hard reboot it, etc. You would eventually notice it was plugged into itself once you checked and traced all the cables.
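Even a dumb little sweep like the sketch below tells you where to start tracing. Totally hypothetical device names, and it assumes a Linux box where `ping -c`/`-W` work:

```python
#!/usr/bin/env python3
# Quick-and-dirty reachability sweep: ping each device between the users
# and the core, and note where the replies stop. Go put hands on that one.
import subprocess

# hypothetical device names, in path order, for illustration
devices = ["core-sw1", "dist-sw3", "access-sw12", "access-sw13"]

for host in devices:
    # one ping with a 1-second timeout; returncode 0 means it answered
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        capture_output=True,
    )
    status = "ok" if result.returncode == 0 else "NO REPLY  <-- check this one"
    print(f"{host:12} {status}")
```

If DNS is part of what’s broken, feed it management IPs instead of names.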

6

wyrdough t1_ix122fb wrote

Sure, it's that easy if the loop is actually at the switch or rack of switches. If it's in some random place out in the building, or worse, a particular PC happens to have multiple network cards plugged into separate ports and someone inadvertently enabled bridging on them, it can be a lot harder to find.

5

[deleted] t1_ix1j873 wrote

[deleted]

5

BikerJedi OP t1_ix1y03t wrote

>I agree, NE should have caught this pretty quick, it’s usually an easy thing to rule out for sudden unexplained widespread packet loss.

Yep. It was literally one of the practical parts of my CCNA exam.

4

Catshannon t1_iwzpaxd wrote

Seems like a bad design, with one server, no backup power source, etc.

Heck, they should have had two servers with a UPS at least. What would they have done if a server actually broke, or had to go down for maintenance or something?

21

Lee2026 t1_ix00kvy wrote

Honestly, that was handled pretty well. They realized it was an honest mistake and instead of reprimanding you, they provided a learning experience.

7

BikerJedi OP t1_iwzfsbl wrote

If anyone enjoyed my writing, I also write over in /r/MilitaryStories about my time in the Army. Come check us out.

6

Apollyom t1_ix3u2w6 wrote

I didn't pay attention to who wrote this when reading; it seemed like a familiar story, but I didn't read it 8 years ago. Then I saw you earlier in the comments. But everyone should give your stories over there a read.

2

BikerJedi OP t1_ix46rqd wrote

>But everyone should give your stories over there a read.

We have so many other amazing authors far better than I am as well. Thank you for the fandom though, it is appreciated.

1

Apollyom t1_ix6e7af wrote

We all know Anathema is one of the better ones, but we gotta get them there with yours and hooked on the others. The coastie stories are pretty great right now.

1

magicbluemonkeydog t1_ix05r81 wrote

I did exactly the same thing in my first data centre job. Managed to knock loose a power cable without noticing and took down a whole floor of the data centre. Got back to the office and everyone was panicking 😅

6

KRed75 t1_ix1fck4 wrote

I once pulled up a tile in our data center and there was 2" of water on the floor. I don't know how long it was like that but it didn't cause a single outage. I located the broken pipe under the raised floor, installed a patch and grabbed the sump pump. Ran a hose outside and started pumping water.

We kept the patches, sump pumps and hoses specifically for a situation like this.

Nobody ever said a word about it to me or anyone else which is surprising because we had thousands of servers for hundreds of customers all over the world in that data center. Now we only have dozens of servers with shitloads of RAM and CPU cores running virtualization software to handle about 10 times the number of customers. I haven't been in the data center in 15 years. We used to have hundreds of people in that building. Now it's only about 20. We walk them through handling hardware installations, removals and repairs and we do everything else remotely.

5

ShowLasers t1_ix1pmu4 wrote

Pyramid. Now there’s a name I haven’t heard in a long time.

5

BikerJedi OP t1_ix1xren wrote

I know. If the date in my title didn't tell you how old I am, that sure gave it away.

3

ShowLasers t1_ix2a6ps wrote

It's gotta be right around the same time I encountered one of those in the Oracle data center. That place was wild. So many niche and special purpose machines. Sequent, Pyramid, Tandem and the more common Suns and DEC boxes of the day. No racks in that section, just big sprawling floor monsters.

2

n33bulz t1_ix0qa8n wrote

British Airways in 2017: Hold my beer

4

6018674512 t1_ix1u1jl wrote

I too once cost an airline in Colorado Springs a ton of money, by basically breaking a latch that was very important. So important the plane couldn’t fly. It was grounded the whole day, so every flight was canceled. Two different repair crews had to be driven in. But at least mine was because I was legit fucking around. Oops.

4

BikerJedi OP t1_ix1xkya wrote

>Oops.

>I was legit fucking around.

Fucking gold. You made me laugh, so thanks. :)

1

6018674512 t1_ix1z7u3 wrote

That’s what I do. I break expensive things and make people laugh.

3

FreeThinkInk t1_ix1c2qm wrote

Notice how they automatically tried to pin this all on the "intern." Red flags galore at this place.

3

Retchers t1_ix1t9zt wrote

Shit company with no redundant servers. Best they died

3

BikerJedi OP t1_ix1xp8k wrote

I didn't shed a fucking tear when they went under. I went home the day the news hit the newspapers and told my wife about it. "Hey /u/griffingrl, remember the shitty company that fired me last year? Fucking lol."

5

mypcrepairguy t1_ix04k9s wrote

Love the story! Thanks for the chuckle.

2

BikerJedi OP t1_ix0dzs1 wrote

Sure thing! I love to write, so that is what I do a lot of here on reddit. I've done a fair bit over the years at /r/MilitaryStories if you would like to check us out.

2

formerly_gruntled t1_ix1a9yr wrote

That's actually good management. I give them credit for not firing your ass for a simple error. Not all managers have the balls to ignore the dollars and focus on the honesty of the mistake.

2

Pinkfatrat t1_ix2psrm wrote

I had a few Pyramids, and it was dead easy to kick the cable out of the back. I got a warning from a CE about it, around ’97. Makes me wonder if it was related.

Stupid free-standing minicomputers that were bigger than a rack, so lots of cabling issues. Don’t start me on running 64 serial cables out the back.

2

badforman t1_ix2ukkr wrote

I would not fire you either; you're a $200,000 investment at that point.

2

FeeFiFoFumIHaveAHung t1_ix387ny wrote

Wouldn't it be funny if your boss went into the room after you were done to check your work, and he was the one who knocked the cable loose?

2

ginger_gcups t1_ix2wyv8 wrote

Hey, could be worse, you could have tanked a major media company in two weeks costing yourself - and other companies - eleven figures...

1

BikerJedi OP t1_ix37tej wrote

That is definitely more impressive than what I did.

1

JustMirror5758 t1_ix1bct5 wrote

Reposted from 8 years ago? Loser, there are so many more interesting things going on now.

−6

BikerJedi OP t1_ix1btkw wrote

Cool story bro. How sad are you that you took the time to not only post a comment that adds nothing to the discussion, but to call me the loser? Clearly most people disagree. Look at the votes and discussion. I'm a writer. Entertaining people is what I do in my spare time. Have you never read a book twice? Or more?

Have a good day.

7