
Hacked & Secured: Pentest Exploits & Mitigations
If you know how attacks work, you’ll know exactly where to look—whether you’re breaking in as an ethical hacker or defending as a blue teamer.
Hacked & Secured: Pentest Exploits & Mitigations breaks down real-world pentest findings, exposing how vulnerabilities were discovered, exploited, and mitigated.
Each episode dives into practical security lessons, covering attack chains and creative exploitation techniques used by ethical hackers. Whether you're a pentester, security engineer, developer, or blue teamer, you'll gain actionable insights to apply in your work.
🎧 New episodes every month.
🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram, Website Link
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us → podcast@quailu.com.au
Hacked & Secured: Pentest Exploits & Mitigations
Ep. 11 – Account Takeover, Token Misuse, and Deserialization RCE: When Trust Goes Wrong
One flawed password reset. One shared session token. One dangerous object.
In Episode 11 of Hacked & Secured: Pentest Exploits & Mitigations, we break down three real-world vulnerabilities where trust between systems and users broke down—with serious consequences.
- Account Takeover via Forgot Password – A predictable ID and exposed tokens let attackers reset passwords without access to email.
- Session Hijack in OTP Login – A logic flaw in how login tokens were handled allowed full account access with just a user ID.
- Remote Code Execution via Java Deserialization – A community-contributed finding where an exposed service deserialized untrusted input, leading to code execution.
These aren’t complex chains. They’re common mistakes with big impact—and important lessons for developers, security teams, and testers.
Chapters:
00:00 - INTRO
00:59 - FINDING #1 - Account Takeover via Forgot Password
06:26 - FINDING #2 - Shared Session Token in SMS Login Flow
10:39 - FINDING #3 - Java Deserialization to Remote Code Execution
16:13 - OUTRO
Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!
INTRO
Welcome to another episode of Hacked & Secured: Pentest Exploits & Mitigations. I’m your host, Amin Malekpour.
Today, we're unpacking three real-world exploits that prove one thing: trust is the weakest link.
· A forgot-password IDOR for full account takeover.
· A shared session token flaw that let attackers reuse SMS logins.
· And a classic Java deserialization to remote code execution exploit that turned unvalidated input into a server-side shell.
Let’s break them down.
FINDING #1 - Account Takeover via Forgot Password
Have you ever tested a password reset flow and thought—this can't be that simple to break?
That’s exactly what Cristi Vlad described in his write-up on Medium.com. He found an account takeover vulnerability hiding inside the "forgot password" feature, a classic place pentesters always check, but so often overlooked by developers.
This is the kind of bug that doesn’t need fancy payloads or scanners. Just careful observation. Let’s break it down.
First—what kind of vulnerability is this?
It’s an IDOR. Insecure Direct Object Reference.
But not in the typical “view someone’s order” way.
Here, it was baked right into the password reset flow.
Cristi was hired to do a pentest on a web application. As always, he made sure to manually test every authentication flow. Tools are great, but this kind of bug is easy to miss if you don’t think like an attacker.
He started with the classic "Forgot Password" link.
Click it. Enter email. Get the reset email.
The email arrived right away. It had a link that looked safe. It was long, encrypted, and not guessable. Something like:
"email-domain dot com slash c slash verylongencryptedstring"
But when he clicked it, the link redirected him to another page on the main app. And that URL was very interesting.
It looked like:
"someendpoint slash confirm dot php question mark u equals long number, t equals hash, x equals another number."
That “u” parameter caught his eye immediately.
It looked like a user ID.
A long numerical value.
So he thought—what would happen if I change these parameters?
First, he tried something simple. He removed the “x” parameter entirely and refreshed the page.
Guess what? Nothing broke. The page still worked fine.
No error. No session invalidation.
That was his first clue the server wasn’t validating the link properly.
Then he took the next step.
He incremented the “u” value by 1.
And suddenly—the page loaded, showing another user’s email address in the reset form.
It literally said:
“Your email is anotheruser@targetcompany.com. Enter new password. Confirm password.”
At this point, he wasn’t sure it would work. So he created a second account of his own in another browser.
He copied that account’s user ID from its profile page. Then he pasted that user ID into the reset URL.
When he loaded the page—it worked.
He was able to set a new password for that account and then log in as that user.
That’s full account takeover.
No special tools. No code injection. Just changing a single number in the URL.
Now let’s pause for a second. What do you think the next step could be?
You’ve just realized you can reset any account’s password just by knowing or guessing their user ID.
Why not try random numbers to find other accounts? Or enumerate IDs to target specific roles?
Cristi did exactly that.
He didn’t just stop at one test account.
He noticed the IDs were long but sequential.
He tried decrementing the ID down—guessing lower values to see who else he could hit.
Eventually, he found the account of a high-level user.
Even the CEO’s account was within reach.
That’s the risk when IDs are predictable and access controls are weak.
And here’s another subtle flaw.
The URL had an “x” parameter, which was probably supposed to act like an expiry mechanism.
If you removed it entirely, the page still worked.
That bypassed any expiration logic completely.
So even old reset links could be reused.
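This kind of parameter tampering is easy to script. Here’s a minimal Python sketch, standard library only, that generates the variants Cristi tried: dropping the “x” expiry parameter and shifting the “u” user ID up or down. The parameter names follow the write-up; the URL itself is made up for illustration.

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def tamper_reset_link(url):
    """Generate tampered variants of a reset URL: one with the expiry
    parameter removed, and two with the user ID shifted up/down."""
    parts = urlsplit(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    variants = []

    # Variant 1: drop the "x" (expiry) parameter entirely
    no_expiry = {k: v for k, v in params.items() if k != "x"}
    variants.append(urlunsplit(parts._replace(query=urlencode(no_expiry))))

    # Variants 2 and 3: increment and decrement the "u" (user ID) parameter
    for delta in (1, -1):
        shifted = dict(params, u=str(int(params["u"]) + delta))
        variants.append(urlunsplit(parts._replace(query=urlencode(shifted))))
    return variants
```

Feed each variant back through your proxy and watch for a page that loads without errors—exactly the behaviour that gave this bug away.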
Let’s talk about why this happened.
The initial reset link in the email was safe enough. It used a long, encrypted string.
But when the user clicked it, the app handed over the keys—exposing raw IDs and insecure parameters in the final redirect.
That’s the mistake.
They assumed encryption on the first link meant they were safe. But by trusting the client with direct access to sensitive parameters in the second step, they lost all security.
So if you’re designing, building, or maintaining applications, here’s what to do.
· Never expose raw user IDs in password reset URLs.
· Use secure, time-limited tokens stored server-side that map to a specific user and expire after one use.
· Don’t let users modify critical parameters. Validate everything on the server.
· And test your own flows manually. Don’t assume the design is safe just because it looks encrypted.
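To make the second bullet concrete, here’s a minimal Python sketch of a server-side reset-token flow. The in-memory store and function names are assumptions for illustration, not the patched app’s actual fix: tokens are random, mapped to a user on the server, time-limited, and burned after one use.

```python
import secrets
import time

# Assumed in-memory store for illustration; a real app would use a
# database or cache shared across app servers.
RESET_TOKENS = {}
TOKEN_TTL = 15 * 60  # reset links expire after 15 minutes

def issue_reset_token(user_id):
    """Mint a random, single-use, time-limited token for one user."""
    token = secrets.token_urlsafe(32)  # unguessable, not derived from the ID
    RESET_TOKENS[token] = (user_id, time.time() + TOKEN_TTL)
    return token  # this goes in the email link; the raw user ID never does

def redeem_reset_token(token):
    """Validate server-side and burn the token. Returns user_id or None."""
    entry = RESET_TOKENS.pop(token, None)  # pop enforces single use
    if entry is None:
        return None
    user_id, expires_at = entry
    if time.time() > expires_at:
        return None  # expired links are rejected even if never used
    return user_id
```

Because the token is popped on first use and carries its own expiry, neither trick from this finding works: there is no user ID to increment, and no expiry parameter to strip.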
And for the pentesters out there…
· Don’t skip any feature. Always test password resets. Change every parameter you see. Remove them. Increment or decrement them. See what breaks. Look for IDs that are predictable.
This is where real-world account takeovers hide. No fancy payloads. No zero-days.
Just curiosity and a little patience. And that’s what makes all the difference.
FINDING #2 - Shared Session Token in SMS Login Flow
We started this episode with a forgot-password flow that let you take over any account just by tweaking a single ID in the URL.
But what if I told you there’s another common feature that can go wrong in a similar way?
What if the weakness isn’t in the password reset link… but in the SMS login itself?
That’s exactly what yetanotherhacker found and shared on HackerOne—a flaw in SMS-based authentication that let them hijack user sessions without ever knowing the victim’s verification code.
It’s basically an authentication logic flaw—where the server trusted users too much during the SMS login process.
Now, let’s dive into how this attack actually worked.
In a typical SMS login, you enter your phone number. The server sends you a verification code by SMS. You type that code in, and you’re logged in. Seems safe, right?
Here’s how this app was designed to do the SMS login.
When the user entered their phone number, the app sent a request to something like /SessionCreate. The server replied with a session token which was needed later, along with the verification code, to call /SessionVerify and complete login.
Everything seemed normal, right? So what do you think the issue was?
The issue was that if the attacker also called /SessionCreate with the victim’s phone number, the server gave them the exact same session token.
No new token per request. Just the same one, over and over.
Let’s think about that.
It meant both the real user and the attacker held identical keys. Neither one was valid yet—they still needed the verification code sent by SMS.
But once the real user entered that SMS code and completed verification, that same session token became valid for both parties. Now the attacker held a fully valid, authenticated session for the victim’s account. All they had to do was keep polling the server until their copy of the token started working.
The attacker could keep checking the session by calling an endpoint like /Me. At first, it would return 401 Unauthorized. But once the victim entered their SMS code, that same request suddenly succeeded, returning user info, phone number, even friend connections.
From there, the attacker had full access and could make any other request the user could.
This was full account takeover, with no SMS code stolen, no phishing. Just exploiting poor session management.
Are you wondering why this happened in the first place?
Because the server didn’t create unique session tokens for each request to /SessionCreate.
It reused the same token for anyone who knew the phone number, meaning the attacker and victim ended up sharing it.
Once the real user completed SMS verification, the attacker’s copy of the token was just as valid.
So for anyone building or designing authentication flows, here’s what you need to do:
· Make sure every call to /SessionCreate returns a new session token.
· Send a new SMS code every time, and invalidate the old one.
· Rate-limit those endpoints so attackers can’t spam them.
· And don’t assume that just because the user enters their phone number, they’re the right person.
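Here’s a toy Python model of the first bullet. The endpoint name SessionCreate comes from the report; the class and storage are assumptions for illustration. The point is simply that two calls with the same phone number must never share a token.

```python
import secrets

class SmsLogin:
    """Toy model of the SessionCreate step, with the fix applied."""

    def __init__(self):
        self.pending = {}  # token -> phone number awaiting SMS verification

    def session_create(self, phone):
        # Fixed behaviour: every call mints a fresh, unguessable token.
        # In the vulnerable app, repeat calls returned the same token.
        token = secrets.token_urlsafe(32)
        self.pending[token] = phone
        return token

auth = SmsLogin()
victim_token = auth.session_create("+15550001111")
attacker_token = auth.session_create("+15550001111")  # same phone number
assert victim_token != attacker_token  # attacker never holds the victim's key
```

With per-request tokens, the attacker’s polling trick dies instantly: verification by the victim only ever activates the victim’s own token.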
And for my fellow pentesters out there…
· This is a classic case for manual testing. Don’t just enter your own number and verify it. Try someone else’s number. Repeat the session creation call multiple times. Check if the tokens are identical. See if verification for one user unlocks access for everyone who holds that token.
It’s the kind of bug no scanner will find. You have to think like an attacker. Look for places where developers forgot that two people might be holding the same key.
And that’s how real compromise happens—not with flashy zero-days, but with quiet, subtle trust failures in the way systems are designed. At the end of the day, it only takes one shared token for attackers to walk right in.
FINDING #3 - Java Deserialization to Remote Code Execution
We started this episode with an insecure password reset flow that let you take over accounts just by changing a number. Then we saw how sharing the same session token in SMS login let attackers hijack someone’s session without ever seeing their code.
But now, let’s look at something even more dangerous.
What if you could send a single request to the server and make it run any command you want?
That’s what Sia, one of our listeners, contributed to our podcast. This is a classic—but still very real—problem with Java deserialization.
Now, for anyone listening who isn’t familiar with this, let’s explain.
Serialization is a way to turn complex data—like objects in Java—into a format you can store or send over the network. Think of it like flattening a 3D model into instructions on paper so you can mail it somewhere.
Deserialization is the reverse. You take those instructions and rebuild the original object.
But here’s the risk: if the server blindly trusts whatever serialized data you send, you can send instructions to build anything—even objects that execute code.
Imagine a toy factory that gets blueprints in the mail. Normally, they expect safe designs—like cars, dolls or something like that. But what if an attacker sends a blueprint for a bomb, and the factory just builds it automatically without checking? That’s deserialization without validation.
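The finding itself is Java, but Python’s pickle has the exact same failure mode, which makes for a compact demo. In this sketch the “blueprint” is a class whose deserialization instructions invoke a callable of the attacker’s choosing; the payload here just records that it ran, but it could equally be os.system.

```python
import pickle

executed = []  # records whether deserialization ran our code

def payload(msg):
    executed.append(msg)

class Blueprint:
    # __reduce__ is pickle's "blueprint": a callable plus arguments
    # that get invoked when the object is rebuilt.
    def __reduce__(self):
        return (payload, ("the factory built what the blueprint said",))

blob = pickle.dumps(Blueprint())  # the serialized blueprint in the mail
pickle.loads(blob)                # rebuilding it executes the payload
assert executed  # the code ran during deserialization, no method call needed
```

Notice that nothing ever calls a method on the rebuilt object—the damage happens during the rebuild itself, which is exactly why deserializing untrusted data is so dangerous.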
So here’s how Sia found the issue in the app he was testing.
He noticed some endpoints in the app accepted large blobs of Base64-encoded data in POST requests. That immediately looked like serialized Java objects.
If you’re a pentester and you see that, you need to pay attention.
Sia intercepted the request with a proxy. He decoded the data and confirmed it had Java serialization headers—those magic bytes that Java uses to say, “Hey, this is serialized.”
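Those magic bytes are worth memorizing: a Java serialized stream begins with AC ED 00 05, which Base64-encodes to the telltale prefix rO0AB. A quick helper for triaging intercepted blobs could look like this (Python used here purely as a pentest scripting language):

```python
import base64

# Java's serialization stream magic: AC ED 00 05
JAVA_MAGIC = b"\xac\xed\x00\x05"

def looks_like_java_serialized(b64_blob: str) -> bool:
    """Heuristic check for Base64-encoded Java serialized data."""
    try:
        raw = base64.b64decode(b64_blob, validate=True)
    except Exception:
        return False  # not valid Base64 at all
    return raw.startswith(JAVA_MAGIC)
```

If this returns True for a POST body you intercepted, you’ve likely found an endpoint worth the deeper testing described below.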
That was the first big clue. But he didn’t stop there.
He tried sending random data to see what the server would do. The app crashed with a deserialization error.
Error messages can be really helpful. They tell you what classes the app tried to load. In this case, the stack trace mentioned Apache Commons Collections—a well-known Java library with deserialization gadget chains.
If you’ve studied these attacks, you know Apache Commons Collections is notorious for this. It has classes like InvokerTransformer that can be used to execute commands when deserialized.
Alright, if you know the server deserializes untrusted data and it uses a vulnerable library, what’s next?
You craft a payload right?
Sia used ysoserial—a popular Java deserialization exploit generator. You pick your gadget chain, specify the command you want to run—like listing files on the server—and it outputs a malicious serialized object.
Then he went back to Burp Suite. He intercepted another POST request to that vulnerable endpoint.
He replaced the normal serialized data with his malicious payload. And hit Send.
A few seconds later, he got a response. The server had executed his command: he saw a directory listing of backend files.
That’s remote code execution.
At this point, Sia effectively controlled the server. He could read config files, extract credentials, pivot into the internal network.
Sia didn’t go that far—he stopped at proof of concept. But this was enough to prove the risk was real.
And it’s worth saying: this wasn’t some advanced zero-day.
This was a known vulnerability in a library that’s been around for years.
If your app deserializes whatever users send, without validation, you’re at risk.
So if you’re building, designing or maintaining Java applications, here’s what you need to do.
· Don’t deserialize untrusted data at all if you can avoid it.
· If you have to, use signing or encryption to ensure it hasn’t been tampered with.
· Use libraries like SerialKiller to allowlist safe classes and override ObjectInputStream to block dangerous classes.
· And always keep your dependencies up to date.
Because a single old library can open the door for full server compromise.
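The allowlist idea translates almost line for line across languages. Here’s the Python analogue of a SerialKiller-style filter, using the Unpickler.find_class override documented in the pickle module; the allowlist contents are just an example.

```python
import io
import pickle

# Only classes on this allowlist may ever be rebuilt from untrusted data.
SAFE_CLASSES = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called whenever the stream names a class; deny by default.
        if (module, name) not in SAFE_CLASSES:
            raise pickle.UnpicklingError(f"blocked class: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(blob: bytes):
    """Deserialize untrusted bytes with the allowlist enforced."""
    return RestrictedUnpickler(io.BytesIO(blob)).load()

# Plain data passes; anything naming a non-allowlisted class is rejected.
assert safe_loads(pickle.dumps({"ok": ["values"]})) == {"ok": ["values"]}
```

Deny-by-default is the key design choice: a blocklist of known-bad gadgets will always lag behind the next gadget chain someone discovers.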
And for my fellow pentesters out there…
When you see serialized data—pay attention.
· Decode it.
· Look for magic headers.
· Fuzz it with invalid values and see if the server crashes or generates errors.
· Test with ysoserial or other tools to see if you can trigger code execution.
These bugs don’t always show up in scans. You have to look carefully.
It’s about understanding how developers trust data—and how you can break that trust.
Don’t forget, it only takes one malicious object to take over an entire server.
And before we wrap this one up—this finding came straight from a listener in the community.
So if you came across something during a pentest, don’t keep it to yourself.
You don’t need to polish it or write a full report. Just share what happened, what you tried, what worked, and what didn’t.
We’ll turn it into something the whole community can learn from.
You’ll find the Google Form link for submissions in the episode description. Let’s raise the bar together.
OUTRO
Every vulnerability we covered today shared one lesson: never trust what the client sends.
From password resets exposing user IDs, to shared session tokens hijacking logins, to deserialization bugs running attacker-controlled code—each was a simple oversight that led to full compromise.
And none of them would have been fixed without responsible disclosure. Because finding flaws is only half the job—helping teams fix them is what makes the internet safer for everyone.
If this episode helped you see security differently, share it with someone who’s building, testing, or defending systems. This might be the one lesson that keeps their users safe.
Let’s make cybersecurity knowledge accessible to all. See you in the next episode.