Corporate IT Fluency Footguns
- Using "incident," "problem," and "change" interchangeably. You say "we have a problem with the database" in a meeting. The ITSM-fluent people in the room hear "a root cause investigation is underway" when you meant "the database is down right now." The Incident Manager does not get paged because nobody filed an Incident — you called it a Problem.
Fix: Incident = something is broken now, restore service. Problem = why does it keep breaking, find root cause. Change = planned modification to production. Use the right word. If the database is down, say "we have an active incident."
Under the hood: These terms come from ITIL (Information Technology Infrastructure Library). In ITIL's framework: an Incident has urgency (restore service NOW), a Problem has a root cause investigation, and a Change has a risk assessment and approval flow. Mixing them up in an ITIL-mature org doesn't just cause confusion — it can route your request to the wrong workflow engine, the wrong team, and the wrong SLA.
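The routing consequence can be sketched in a few lines. This is a hypothetical, simplified routing table (the queue names and goals are illustrative, not from any real ITSM product) — the point is that the record *type*, not your wording in the meeting, decides who gets paged and which workflow starts:

```python
# Hypothetical ITSM routing table: the record type chosen at filing time
# determines the queue, the goal, and implicitly the SLA clock.
WORKFLOWS = {
    "incident": {"queue": "incident-manager", "goal": "restore service now"},
    "problem":  {"queue": "problem-management", "goal": "find root cause"},
    "change":   {"queue": "change-advisory-board", "goal": "assess risk and approve"},
}

def route(record_type: str) -> dict:
    """Route an ITSM record; an unknown type is a filing error, not a default."""
    try:
        return WORKFLOWS[record_type]
    except KeyError:
        raise ValueError(f"unknown record type: {record_type!r}")

# "The database is down right now" belongs in the incident workflow:
incident = route("incident")
```

File it as a Problem and the Incident Manager's queue never sees it — exactly the failure mode above.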
- Deploying without a change request in a CAB-governed org. You merge to main and your CI/CD pipeline deploys to production. Nobody told you that all production changes require CAB approval. The deployment causes a minor blip. Now you have an unauthorized change on the audit trail, your manager gets a call from the change manager, and the security team adds your pipeline to their watch list.
Fix: On your first week, ask: "What is the change management process for production deployments?" Understand whether your team has pre-approved Standard Changes or if everything goes through Normal Change. Build the approval step into your CI/CD pipeline if needed.
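One way to build that approval step in is a pre-deploy gate that refuses to ship without an authorized change. The sketch below is an assumption-heavy illustration: the template name is made up, and `lookup_status` stands in for whatever ITSM API your org actually exposes.

```python
# Hypothetical pre-deploy gate for a CAB-governed org: the pipeline ships
# only if the deployment matches a pre-approved Standard Change template,
# or references a change record that the CAB has approved.
STANDARD_CHANGES = {"routine-app-deploy"}  # assumed pre-approved templates

def may_deploy(template=None, change_id=None, lookup_status=None):
    """Return True only if this deployment is an authorized change."""
    if template in STANDARD_CHANGES:
        return True  # Standard Change: pre-approved, no per-deploy CAB review
    if change_id is None or lookup_status is None:
        return False  # Normal Change with no ticket: this is the audit finding
    return lookup_status(change_id) == "approved"
```

Wire this (or your CI system's native approval gate) in front of the production deploy job, and "unauthorized change" stops being a failure mode you can hit by merging.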
- Ignoring CMDB updates after infrastructure changes. You decommission three servers and spin up five new ones. You update DNS, monitoring, and the wiki. You do not update the CMDB. Three months later, someone checks the CMDB during an incident and routes troubleshooting to servers that no longer exist. Forty-five minutes are wasted before someone realizes the CMDB is wrong.
Fix: Add CMDB updates to your deployment checklist. If your org has automated discovery, verify it picked up your changes. If it is manual, update it the same day. Stale CMDB entries are invisible debt — they only hurt you during incidents when you can least afford it.
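A cheap way to catch the stale-entry problem early is a drift check: compare what is actually running against what the CMDB says. A minimal sketch, assuming both sides can be reduced to sets of hostnames:

```python
def cmdb_drift(actual: set, cmdb: set) -> dict:
    """Compare live inventory against CMDB records (both sets of hostnames)."""
    return {
        "missing_from_cmdb": actual - cmdb,  # new servers never registered
        "stale_in_cmdb": cmdb - actual,      # decommissioned but still listed
    }

# The scenario above: three servers decommissioned, five spun up,
# CMDB never touched. Every entry is wrong in one direction or the other.
drift = cmdb_drift(
    actual={"web4", "web5", "web6", "web7", "web8"},
    cmdb={"web1", "web2", "web3"},
)
```

Run it on a schedule and the 45-minute mid-incident surprise becomes a routine ticket.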
- Presenting technical details to business stakeholders instead of impact. You explain to the VP that "the PostgreSQL connection pool is exhausted because max_connections is set to 100 and PgBouncer is not configured." The VP's eyes glaze over. They do not care about connection pools. They care about "the payment system went down for 20 minutes and we lost an estimated $15,000 in transactions."
Fix: Lead with business impact (revenue lost, users affected, SLA breach). Follow with root cause in one sentence. End with the fix and its cost. Save the technical details for the engineering postmortem.
- Not knowing whether your project is CapEx or OpEx. You propose migrating from on-prem to AWS. Finance asks "is this CapEx or OpEx?" and you stare blankly. The CFO's reaction to your project changes dramatically depending on the answer, because it determines where the cost lands: CapEx sits on the balance sheet as an asset and depreciates over years, while OpEx hits the income statement immediately.
Fix: On-prem hardware = CapEx (depreciated). Cloud services = OpEx (expensed monthly). Large software development projects can sometimes be capitalized. Ask your finance business partner — yes, your team has one — before presenting budget requests.
Gotcha: Cloud migrations often shift spend from CapEx to OpEx. This sounds neutral, but it changes how the cost appears on the income statement. CapEx is depreciated over 3-5 years (spread out), while OpEx hits the current quarter's P&L fully. A $1M on-prem server farm (depreciated at $200K/year) replaced by $250K/year cloud spend looks like a cost increase on the P&L even though the total spend is lower. Finance cares about this distinction deeply.
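The arithmetic behind that gotcha is worth making explicit. Straight-line depreciation spreads CapEx over the asset's life, while OpEx lands in full each year:

```python
def annual_pnl_hit(capex=0.0, life_years=5, opex_per_year=0.0):
    """Yearly income-statement cost: straight-line depreciation plus OpEx."""
    depreciation = capex / life_years if capex else 0.0
    return depreciation + opex_per_year

# The example from the gotcha: a $1M server farm depreciated over 5 years
# versus $250K/year of cloud spend.
on_prem = annual_pnl_hit(capex=1_000_000, life_years=5)  # $200K/year
cloud = annual_pnl_hit(opex_per_year=250_000)            # $250K/year
```

On this view the cloud migration shows up as a $50K/year P&L increase, even in years where total cash spend is lower — which is exactly the conversation your finance partner will want to have.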
- Confusing SLA, OLA, and UC. An SLA (Service Level Agreement) is your promise to the customer; it rests on OLAs (Operational Level Agreements, between internal teams) and UCs (Underpinning Contracts, with external vendors). You promise a customer "99.99% uptime" without checking what your internal team OLAs and vendor UCs actually support. Your network team's OLA is a 4-hour response time for P2 issues, and your cloud provider's UC guarantees 99.9% (not 99.99%). You just promised something your dependencies cannot deliver.
Fix: SLAs are backed by OLAs and UCs. Before agreeing to any SLA number, trace the dependency chain: what internal and external agreements support this commitment? Your SLA can only be as good as your weakest underpinning agreement.
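For availability numbers specifically, the dependency chain is multiplicative when every dependency is in the critical path: each one can fail independently, so the ceiling on what you can promise is the product of what they guarantee. A minimal sketch (the 99.9% and 99.95% figures below are illustrative):

```python
import math

def max_supportable_sla(dependency_slas):
    """Upper bound on the availability you can promise when every listed
    dependency is in the serial critical path: availabilities multiply."""
    return math.prod(dependency_slas)

# A 99.9% cloud provider UC and a 99.95% internal platform OLA together
# cap you well below 99.99%, before any of your own failure modes count.
ceiling = max_supportable_sla([0.999, 0.9995])
```

If `ceiling` is already below the number in the contract, no amount of heroics on your side closes the gap — renegotiate the SLA or the underpinning agreements.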
- Treating "let's take that offline" literally. In your first corporate meeting, someone says "let's take that offline" about your concern. You schedule a follow-up meeting. Nobody responds to the invite. You follow up twice. Still nothing.
Fix: "Take it offline" usually means "we're not discussing this now and possibly never." If the topic is important to you, do not wait for the other person to schedule. Follow up directly with a short message: "You mentioned taking X offline — when works for you?" If they dodge twice, escalate through your manager or raise it again in the next meeting.
- Not knowing what "IC" means on a job listing. You see a job posting for "Senior IC — Infrastructure" and assume IC means some internal company acronym. You do not apply because the title is confusing. It means Individual Contributor — a senior technical role with no people management responsibilities. You just skipped a perfect fit.
Fix: IC = Individual Contributor (does the technical work, no direct reports). EM = Engineering Manager (manages people). "Staff IC" or "Principal IC" are very senior technical roles. When a listing says "IC track," it means you can advance in seniority without managing people.
- Saying "I don't know" to a SOC2 auditor without following up. The auditor asks about your secret rotation process. You say "I'm not sure, I think we rotate annually" and move on. The auditor writes down a potential finding based on your uncertain answer. Your actual process is solid — you rotate quarterly via Vault — but now the audit trail has your vague answer.
Fix: If you do not know, say "I'll need to verify that and get back to you with documentation." Then actually follow up within 24 hours with the correct answer and evidence. Auditors expect this — it is far better than guessing.
- Underestimating the politics of "buy vs build." You spend two weeks building an internal tool that works great. Then you discover that procurement already signed a contract with a vendor for the same capability three months ago. Nobody told you because the decision was made at a director level you do not have visibility into. Your work gets shelved.
Fix: Before building anything significant, check: "Has anyone already evaluated or purchased a tool for this?" Ask your manager, check the vendor/tool inventory, and search Confluence/SharePoint for RFP or POC documents. Ten minutes of research can save two weeks of wasted effort.
Remember: The three questions to ask before building an internal tool: (1) "Does a vendor tool already exist in our stack?" — check with IT procurement and search the CMDB. (2) "Has another team already built this?" — search internal repos and Slack. (3) "Will I maintain this in 2 years?" — if not, the tool will rot and become a liability. If the answer to #3 is no, the vendor tool is almost always the better choice regardless of fit.