Hey folks! Or Weis here with another digest 💌
The way we build software is changing. Fast.
Software engineers don’t just write code anymore—they’re designing, iterating, and communicating with AI in new ways. While the fundamentals of engineering remain, the real challenge now is how we express our intent—whether it’s to an AI system, a team member, or even to our future selves.
The funny thing is, it’s a problem we’ve faced before.
(Not in a space-time-continuum-disrupting way - just hear me out.)
Emails for My Future Self
Years ago, I had a workplace I attended only every other week. That gap resulted in a routine: every time I left, I’d send myself an email:
"Hi, dear Or from the future, this is what you did last time. This is what you need to do this time. Good luck. Bye, Or from the past."
Every time, future-me would read it and think:
"Why? Why didn’t you do it well? Why didn’t you leave more docs? Why didn’t you comment this better? Why is this code so bad? Who the hell wrote this?"
(Hint: It was me.)
In time (and it took a while), my skill at translating knowledge across time, from past-me to future-me, improved. And an interesting thing I’m seeing now is that the exact same skill is crucial when working with AI.
Communicating with AI = Communicating with Yourself Across Time
As AI takes a bigger role in development, how we document, design, and structure software matters more than ever. AI doesn’t just write (generate) code—it needs clear architecture, design goals, and structured iteration.
The way you explain your intent to AI isn’t that different from explaining your intent to a teammate—or even (when spread across time) to yourself.
This lesson also applies to access control and authorization.
The Art of Translation
When it comes to authorization, one of the biggest questions we hear from customers is:
“How do I model my access control policies to fit my software?”
It’s not just a technical question—it’s a translation problem. You need to bridge multiple domains:
Your app’s architecture
User stories and business logic
Security and compliance requirements
And somehow, all of these have to fit together into a single, cohesive policy model.
It’s a conversation full of trade-offs because you’re not just defining rules—you’re translating real-world business logic into structured policies.
This kind of Domain Translation is already embedded in how we build policies today. Take APIs, for example.
Let’s say someone is making a request to an API endpoint. You could translate it like this:
“If someone is reaching this URL, they’re trying to create a document. I need a policy for creating documents.”
“If someone is querying this database field, they’re trying to read a user profile. I need a policy for user profile access.”
You’re mapping technical actions into policy logic—and that’s the same process we use when designing software, defining security, or even prompting AI.
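To make the translation step concrete, here’s a minimal sketch in Python. The endpoint patterns and the policy table are purely illustrative, not from any real system: the point is the mapping itself, from a raw technical action (HTTP method + path) to the business-level action and resource a policy can reason about.

```python
# Hypothetical mapping from technical requests to policy-level terms.
# The routes and resource names below are made up for illustration.
REQUEST_TO_POLICY = {
    ("POST", "/documents"): ("create", "document"),
    ("GET", "/users/{id}/profile"): ("read", "user_profile"),
}

def translate(method: str, path_pattern: str) -> tuple[str, str]:
    """Translate a technical request into policy terms: (action, resource)."""
    try:
        return REQUEST_TO_POLICY[(method, path_pattern)]
    except KeyError:
        # Unmapped requests are denied by default - fail closed.
        raise PermissionError(f"No policy mapping for {method} {path_pattern}")

print(translate("POST", "/documents"))  # -> ('create', 'document')
```

Once requests live in policy terms like `("create", "document")`, the actual access rules can be written against the business domain instead of against URLs and table names.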
Prompts as Intellectual Property
Speaking of prompts, people are already starting to treat them as valuable assets - and for good reason. It’s very common to see posts like:
"Steal my prompt to do X!"
"This prompt will 10x your productivity!"
And it makes perfect sense. A good prompt is a domain translation tool. It takes your intent and translates it into structured AI instructions.
Editor’s Note: Building Processes, Not Prompts
From my personal experience with LLMs, creating reusable prompt templates is close to impossible. The vast majority of those circulating on social media don’t work well outside their original context. They either produce painfully generic results or make zero sense once uprooted from the context where they originated.
If you want to truly integrate LLMs into what you do, be it writing, coding, or building access control policies, prompts aren’t enough - the key is establishing structured processes. A process treats AI as a tool, guiding it through multiple stages and domains of knowledge until the right result is achieved.
Daniel
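The staged approach Daniel describes could be sketched like this. Everything here is a hypothetical illustration: `ask_llm` is a stand-in for whatever model client you actually use, and the three stages are just one plausible way to split the work.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: in a real setup this would call your LLM of choice.
    return f"[model output for: {prompt[:40]}...]"

def draft_policy(requirements: str) -> str:
    """Guide the model through stages, each grounded in one domain of knowledge."""
    # Stage 1: translate business requirements into access-control concepts.
    concepts = ask_llm(f"List the resources, roles, and actions implied by: {requirements}")
    # Stage 2: draft a policy from those concepts, not from the raw text.
    draft = ask_llm(f"Draft an access control policy using: {concepts}")
    # Stage 3: review the draft against the original requirements.
    return ask_llm(f"Check this policy against the requirements '{requirements}': {draft}")
```

The value isn’t in any single prompt: each stage is small and checkable, and the process carries the intent from one domain to the next.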
Bridging the Gaps Between Humans, AI, and Policy
Ultimately, this is what good engineering, AI collaboration, and access control all have in common:
They require clear domain translation
They demand structured, reusable processes
And they thrive on good documentation and communication
Whether you’re designing an AI-powered development workflow, an access control model, or just trying to make sense of your own past decisions, the key is building bridges between domains.
We’re still figuring out the best way to do this, and we’re far from done. As tools, processes, and AI capabilities improve, this will become a core skill for every engineer, policy designer, and security professional.
What Do You Think?
I’d love to hear from you—
How do you translate knowledge between different domains?
Have you developed your own structured workflows for working with AI?
Or have you ever been frustrated reading your own past work, wondering “Who the hell wrote this?”
Drop a comment or share this with someone who might enjoy the conversation. Let’s keep figuring this out together.
Until next time,
Or