Now on Substack!

As part of my growing POSSE strategy (Publish on your Own Site, Syndicate Elsewhere), I’ve decided to feature some of my longer pieces—and some original long-form articles—on Substack.

We’ll see how well this works out; AI is currently making a mockery of what was once a great platform.

If you’re interested, I published my three-part series on hiring, degree requirements, and salary as a single, long-form article on the site.

Check it out!

And as always, thank you for reading.

—Nathan

How to get a job in cybersecurity

I went to pick up a friend for lunch a while back, and while I was waiting, I struck up a conversation with one of his coworkers. The young man asked me about my job, and when I told him I worked for a cybersecurity company, he got excited!

“That’s what I’m studying in school,” he said. “Would you mind telling me how you broke into the cyber job market? It would really help me out.”

Here’s what I told him, tongue-in-cheek the entire time:

Step 1: Decide you want to be a physical therapist and spend the first couple of years of college studying biology, human anatomy, chemistry, and exercise science while working in the university’s gym.

Step 2: Realize how much you hate chemistry and biology, then switch your major to jazz studies because you’ve been a musician all your life and want to make a go of it as a drummer.

Step 3: Spend the next 3 years questioning every decision you ever made about becoming a musician and changing your major between history and music every semester until your wife-to-be says, “Why don’t you just double major?!”

Step 4: Graduate with a double major in history and music, then take a job working as an ophthalmic technician for two eye doctors because it’s the first job someone offered you that didn’t pay minimum wage.

Step 5: Become a banker after your wife graduates college because you have to move, and your boss’s daughter just happens to be hiring, and you’ve got a recommendation.

Step 6: Demote yourself to teller (seriously, I willingly took a demotion) because you hate putting people in debt and cold-calling people over dinner to sell them credit cards.

Step 7: Get recruited by Apple on the recommendation of a friend, and learn how awesome you are at teaching. Then spend the next two and a half years honing that craft.

Step 8: Go to work for a child support agency to develop your writing and marketing skills… Because yeah, that tracks.

Step 9: Watch the world go to hell during a global pandemic, lose your job, flounder in unemployment for 8 months, and nearly die from COVID at 30 years old.

Step 10: Find a way to combine your teaching and marketing skills by becoming a content developer for an online business education company.

Step 11: Get laid off from that company for no reason at all, only to have a conversation with a former coworker from said company who gets you an interview with your future boss at the cybersecurity company.

There you have it: in just 11 easy steps, you, too, can get a great job in cybersecurity!

He got the joke, and he knew I was trying to be helpful even though I really had no idea how it happened.

But I ended my advice with an offer: if he wanted a job at my company, or any other company where I knew someone, I would give him a recommendation and make an introduction.

Because that’s how I got every single job I’ve ever had. I knew someone (or someone knew me and my work); that led to a conversation where we genuinely connected. That connection often led to a job offer.

I’ve (most likely) applied to more than 1,000 jobs since I started working at age 16. And the only way I ever got an interview was because someone treated me like a human being, had a conversation with me, and made a connection I needed to get my foot in the door.


Speaking of helping other people get their foot in the door, two of the best people I’ve ever worked with (Joe Charman and Rebeca Leininger) are looking for their next roles.

If you need hardworking, technically savvy, AI-fluent, customer-focused, cybersecurity and threat intelligence experts, you’d be extraordinarily LUCKY to have them on your team.

Reach out and connect with them on LinkedIn.

We’re disconnected from work because work is disconnected from life

One reason I think so many of us are feeling disconnected from work is that work itself is disconnected from our lives.

For most of human history, work was literally and directly tied to living our lives. We hunted and gathered to feed ourselves, our families, and our tribes.

After the Agricultural Revolution, we worked the fields and raised animals to directly support how we lived. At the same time, the idea of markets developed, and we began selling the surplus and byproducts of our work. But the selling itself was still directly tied to the work we’d done previously.

Then the Industrial Revolution hit, and humans went to work in factories. And for the first time in our history, our work was separate in every way from our lives.

We left our farms and our cottage industries (aptly named). We stood at assembly lines for 10 or 12 hours a day, away from our “tribe,” moving or assembling widgets that had nothing to do with our day-to-day.1 We didn’t make the parts, build the factory, or come up with the ideas for what we were making.

But it was also physically separate from the rest of our lives. No longer did we work fields on or near our homes or in little shops in the town square.

Instead, we commuted away from home to work in an alien environment in the most anti-human way possible, literally putting our lives on hold for our shift. And the only things we had to show for it were little bits of paper or metal at the end of the week.

And once the Information Age hit, the disconnect became absolute. Now, most of us are completely disconnected from production altogether. We often don’t “make” things anymore. We administer, we meet, we talk about work through digital (no longer tangible) communication methods. And occasionally, we do something that sort of feels like actual work.

Few of us are close enough to the end product of what’s made, sold, and consumed to actually feel what it is we’re doing.

And now we have generative AI, and we’re steadily offloading what’s left of our work to a little homunculus that does everything for us.

So where does this leave us?

We’re heading toward another revolution, but I think it’s different from what all the AI pundits predict.

I think this disconnect is going to become so severe that we’ll push back and seek to return to the roots of human production.

More of us will step out on our own, or join together in small groups to make things that matter for people who care.

I think this AI revolution will end up becoming a revolution of meaning.


  1. Sure, Ford made sure that every American eventually had an automobile of their own. But it’s highly unlikely that you would have made the very same car that you, yourself, were driving. ↩︎

Remember: there are humans involved

There’s one train of thought in discussions about AI that aggravates me the most.

The conversation inevitably, yet casually, turns to when it will be time to eliminate job roles and pass the work off to AI.

I hear it all the time, and there are several problems with it, the least of which is the belief that an AI agent, in the current state of the field, can actually replace a human expert in a field or subject simply because it’s fast and efficient at doing similar work.1

It can’t, at least not yet. And when it’s used as such, the results are often banal (AI slop is aptly named).2

But the issue with these conversations that bothers me the most is the nonchalance with which these comments are made.

We’re talking about people here. Human beings with hopes, dreams, and worlds as rich as those having the conversations. People with families, children, parents, and friends who rely on them.

And we reduce them to job roles?

I’m no Luddite. I know that technology changes the world of work. The steam shovel got rid of ditch diggers, and email eliminated the secretary pool.

AI will eventually have a major impact on the job market: it will change which jobs are available and, for those that remain, what they look like.

But in the meantime, let’s remember that we’re talking about people. And our decisions need to be made with empathy and understanding for the impacts they’ll have on them.


  1. If you actually think that AI is replacing human beings, you need to check out the addendum at the bottom of Cal Newport’s latest blog post. ↩︎
  2. I assert that companies that make these decisions to eliminate jobs and outsource everything will thoroughly regret them in the next few years when the limits of these tools are exposed, and we enter the “trough of disillusionment.”

    The Gartner Hype Cycle is proven, and it applies to these AI companies as much as it has to every other culture-changing technology we’ve experienced.

    After the hype dies down, most things will go back to normal, except that people will use AI for things that it’s actually good at. ↩︎
(Image: the Gartner Hype Cycle. By Jeremykemp at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10547051)

Why I still write a blog in the age of AI

Blogs are dead. They were at peak relevance about 15 years ago.

Then, algorithmically driven social media platforms ruined them by providing perfectly curated, mind-numbing entertainment any time you want it.

Now we’re living through another significant moment for tech innovation and the internet.

When AI can write everything for us, why bother writing in a medium that hasn’t been relevant in more than a decade?

Because I refuse to offload my thinking to artificial intelligence.

I use AI every day to help me with work, complete minor tasks, and even sharpen my thinking. But I will not let it think for me.

So I write this blog. It’s not because I have a huge audience. There are only about 300 subscribers, and of those, only a third of you read it regularly.

So it’s definitely not because I’m reaching millions of people or making my living writing here. It’s because I need to do it for my own mental acuity.

It helps me crystallize ideas and expose gaps in my knowledge and logic. It makes me assert, which means I have to consider counterarguments and be okay with potentially being wrong.

So I continue to write. According to my WordPress dashboard, this is my 500th published blog post. That feels like a pretty significant milestone.

And if you’re reading this—thank you. From the bottom of my heart, I am truly grateful.

The em dash exists for a reason

And I’ll be damned if I stop using them now. I’ve been using em dashes since I started writing—because they work.

They do things that other parenthetical devices like commas or parentheses don’t do.

They add force to your arguments. They separate potentially unrelated but still relevant or useful thoughts—have you ever noticed this?—from the point you’re making.

And for those of you who say that there’s no way to recreate them on a computer keyboard…

Shift + Option + – (for Mac users) gives you —.

Here are 11 of them: — — — — — — — — — — —

Also, there are two other types of dashes.

- (press only the dash key) is a hyphen, most often used to join compound words.

– (made by pressing Option + – on a Mac) is called an en dash and can be used to separate things like dates (April 20–23).1
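If you’re ever unsure which of the three you’ve actually typed (they can look nearly identical in some fonts), you can check the character’s Unicode code point. A minimal Python sketch, using only the standard library:

```python
import unicodedata

# The hyphen, en dash, and em dash look similar but are distinct characters.
for ch in ["-", "\u2013", "\u2014"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Output:
# U+002D  HYPHEN-MINUS
# U+2013  EN DASH
# U+2014  EM DASH
```

Paste any mystery dash into a script like this and the name tells you exactly which one it is.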

And then you have the glorious em dash.

Why do they show up so often in AI writing? It’s simple: most of the best writers in history made (and still make) liberal use of it—because it works! And because AI has imbibed all the writing ever written, it also uses it quite often.

Does AI use them too much? Absolutely.

Does that mean we should stop using them? Absolutely not.

So what’s the solution? You get really damn good at writing.

Develop a style of your own, a voice, a way of writing that sounds like you—and only you. So when people read your writing, they know that you did it, not an AI.

And you’d be able to use fifty em dashes in a single piece if you wanted to, and no one would care, because they would know, simply because of your personal style, that you were generous enough to take time out of your day to share something worth reading.

If you write well—and like yourself—you’ll be fine.

(Ann Handley actually beat me to this a long while back. But I was so incensed by no fewer than seven posts about this yesterday that I had to say something.)


  1. Unfortunately, there is a feature in WordPress’s code that prevents the three different dashes from rendering properly. I didn’t realize this until after I published. So if you’re reading this on my site instead of in email, you won’t be able to see the difference between them.

    You can test this for yourself: pull up a blank document somewhere on your computer and try all three keystroke combinations.

    Also, this is a great reason to subscribe to my email newsletter, so you’ll see it rendered the way it’s supposed to. ↩︎

The thing about AI (no thanks, Claude)

It makes bad stuff better.

It makes poor writers better. So too for unskilled graphic designers or amateur financial analysts.

But also spam, scams, and cyber threats. It used to be easy to see a phishing email for what it was, but the Nigerian princes are gone.

At the same time, it also makes the great stuff worse.

Great writing becomes banal and repetitive. AI art looks like… AI art (or the boring stuff you see in hotels).

It brings the good stuff down while bringing the bad stuff up. In short, it averages everything out.

And that makes sense because LLMs imbibe everything that exists—the great, the good, the average, and the bad.

And in most subjects, the bad always outnumbers the great (or even the good). This lowers the average.

The key is to know what to use it for.

If you’re bad at something, AI will make you better at it. So use it.

But if you’re good (or especially if you’re great) at something… Think twice before handing it off to AI.

I ran this post through Claude and, based on its knowledge of my style, asked it for feedback. Claude told me the post had great bones, but it also told me to remove certain items and phrasing… Precisely the same things that make my writing style what it is.

It told me to remove a tiny explanatory sentence as well (do you know which one?). And I refused to do it, because there are plenty of people who still don’t know how this stuff works. That knowledge is central to the premise of this post.

It also recommended that I end the post with something like, “Average is the enemy of excellence.” How many times have you heard that before? It felt too much like a motivational poster to me—I can already see the kitten attempting pull-ups with that as the headline.

So, I think I should actually end this post by saying, “Claude, your suggestions were appreciated, but wrong.”1

And that is precisely my point.


  1. I did make a couple of tiny grammatical changes it suggested, which did actually improve the post. And sometimes, Claude’s recommendations actually do improve my writing.

    And, somewhat ironically, Claude did improve this post by giving me feedback, even though the feedback turned out to be flawed. It proved my point in real time. So I must give credit where it’s due.

    But more often than not, Claude’s suggestions make my writing worse because it no longer sounds like me. ↩︎

Excuse me: Is that emergency button made in China?

There’s a blue emergency call tower halfway along a walking trail I frequent each week.

If you’re being attacked or having a heart attack, you smack a button on it, and it immediately calls emergency services and shares your location with them so they can find you—fast.

When I walked by it the other day, I noticed they’d added something new to it. It was a big sign, probably a square foot in size.

And on the sign, printed in big block letters, were the words, “Proudly made in the USA!”

I thought about that sign for the rest of my walk. I just kept thinking, “Who was that for? What was the purpose of that new sign?”

I don’t think it’s for the person in trouble. If you’re being chased by an axe murderer, would you check the tower for a “Made in China” stamp before you pressed the button?

I doubt it.

And if it were manufactured somewhere else, would that really deter you? Oh geez. Made in China?! Gross. I’d rather this guy just kill me than press the button.

It’s not marketing. No one who sees it has any need (or ability) to buy one and stick it in their front yard, so where it’s made doesn’t factor into a buying decision.

Is it supposed to inspire confidence in people like me who walk past it every day? I already know most things aren’t manufactured here, and they typically work just fine.1

The only answer that seems to fit is that it’s for the people who installed it.

It’s a flag. It’s performative patriotism—not for any user of the tower, but for whoever approved the purchase, or whoever installed it. It’s a tribal signal. To paraphrase Seth Godin: “People like us install things like this.”

The sign isn’t communicating with us. It’s communicating about someone else.


  1. In fact, I’ve had such horrible experiences with American-made brands (see: any American car) that it might actually be having the opposite of the effect the sign intended. Now I’m thinking, “Man… Would that thing actually work in an emergency?” ↩︎

Digital dementia

Psychologists also call it digital brain rot.

It describes the forgetfulness, the inability to focus on anything meaningful, the brain fog, and the mental fatigue caused by our chronic overuse of smartphones, digital devices, online games, and social media.

It’s even been shown to reduce the gray matter in our brains associated with emotional regulation, decision-making, and creativity.

The good news is that it only takes a few days to bounce back, but it can be a tough few days.

Try a digital declutter or the phone-foyer method. Find something that works for you, but for your sake and that of future generations, do something.

Everyone worried TV would rot our brains, but smartphones actually do. We know these devices are purpose-built for addiction and harm.

Now is the time to break the cycle.


H/t to Brad Stulberg.

Are you good enough?

That question is impossible to answer. It’s missing a vital fragment.

“At what?”

Until you know what you need to measure yourself against, there’s no point in measuring at all.