Hacked & Secured: Pentest Exploits & Mitigations

Ep. 4 – Exposed Secrets & Silent Takeovers: How Misconfigurations Open the Door to Attackers

Amin Malekpour Season 1 Episode 4

Exposed secrets, overlooked permissions, and credentials hiding in plain sight—each one leading to a critical breach.

In this episode, we break down three real-world pentest findings where a forgotten file, a misconfigured setting, and a leaked credential gave attackers full control. How did they happen? How can you find similar issues? And what can be done to stop them?

Listen now to learn how attackers exploit these mistakes—and how you can prevent them.

Want your pentest discovery featured? Submit your creative findings through the Google Form in the episode description, and we might showcase your finding in an upcoming episode!

🌍 Follow & Connect → LinkedIn, YouTube, Twitter, Instagram
📩 Submit Your Pentest Findings → https://forms.gle/7pPwjdaWnGYpQcA6A
📧 Feedback? Email Us podcast@quailu.com.au
🔗 Podcast Website → Website Link

INTRO
What if a single forgotten file could hand over access to private repositories?
What if one misconfigured permission let an attacker escalate from a basic user to full system control?
And what if a credential, hidden in plain sight, granted access to a network switch serving multiple businesses?
Each of these cases proves that the most devastating exploits don’t always start with complex hacking—sometimes, they start with a simple mistake.
I’m Amin Malekpour, and you are listening to Hacked & Secured: Pentest Exploits & Mitigations.

FINDING #1 - How a Forgotten File Exposed Private Repositories
You’re testing a packaged desktop application. You extract its contents, explore its structure, and then you see something strange—a forgotten .env file.
At first, it seems harmless, probably just some leftover configuration from development. But then you open it—and sitting inside is an active GitHub access token.
This wasn’t just any token. It had push and pull access to the private repositories of a major company.
That’s exactly what augustozanellato discovered on HackerOne—a critical security flaw that could have allowed an attacker to push malicious code into private repositories.
Let’s break it down.
It all started when the researcher extracted the contents of a packaged application using standard tooling. Inside the extracted files, they noticed a .env file typically used to store environment variables like API keys and authentication tokens.
Initially, they ignored it—it looked like just another leftover configuration file. But as they continued analyzing the app, they noticed something odd: the .env file wasn’t actually being used anywhere in the source code.
That raised a question: if the app never loaded it, why was it there?
So, they opened the file—and there it was: a GitHub access token.
Was the token still active? And if it was, what level of access did it have?
To find out, they used the token to authenticate against the GitHub API. A simple request confirmed the token was valid.
That was bad enough, but what came next made it much worse.
They queried the API to check the token’s access scope and quickly realized it wasn’t just a random leaked credential—it was an active key linked to a major organization.
They investigated further, testing the token’s permissions. What they found was alarming: read and write access to private repositories.
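The scope check described above comes down to a single API call: for classic personal access tokens, GitHub echoes the granted scopes back in the X-OAuth-Scopes response header of any authenticated request. A minimal sketch of that check (the network call is shown but never run here; any real token value would go where the placeholder is):

```python
import urllib.request

def parse_scopes(header_value):
    """Turn a comma-separated scope header into a clean list."""
    return [s.strip() for s in header_value.split(",") if s.strip()]

def github_scopes(token):
    """Return the scopes GitHub reports for a classic personal access token.

    GitHub includes the token's granted scopes in the X-OAuth-Scopes
    response header of any authenticated API request.
    """
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_scopes(resp.headers.get("X-OAuth-Scopes", ""))

# A header of "repo, read:org" means full read/write access to private
# repositories -- exactly the level of access found in this report.
```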


If you had this level of access, what would you do next? Would you try cloning repositories to see what’s inside? Would you attempt to escalate privileges? Would you inject a backdoor into the codebase?
With push access, an attacker could have modified the organization’s source code, injected backdoors, or introduced malicious dependencies—changes that could be distributed across production systems without immediate detection.
With pull access, they could have cloned private repositories, gaining access to sensitive intellectual property, unreleased features, API keys, or additional credentials buried in the code.
And worst of all, no one would have noticed until it was too late. The token had no extra verification requirements, so an attacker could have done all of this without ever needing a password or triggering an alert.
This wasn’t just an accidental leak—it was an open door to the company’s entire development pipeline and a supply chain compromise waiting to happen.


Could this kind of leak still happen today? If companies enforce strict secret management policies, no. But in real-world pentesting, we still find forgotten .env files, exposed credentials, and hardcoded secrets inside production builds. This is why pentesters need to test beyond traditional vulnerabilities; secrets hidden in packaged files are an attack vector many companies overlook.


For developers, here’s what should have been done:
• Sensitive credentials should never be included in app files.
• Even if leaked, tokens should be time-limited to reduce long-term risk.
• Tokens should follow the principle of least privilege—for example, a token could be read-only instead of having push access.
• Automated detection should be in place to scan repositories for exposed secrets before deployment (using tools like GitHub Secret Scanning, TruffleHog, or Gitleaks).
Any one of these measures could have prevented this attack; together, they would have made it nearly impossible.

For Pentesters: How to Find Similar Issues
• Analyze extracted application files and look for forgotten development artifacts, logs, backup files, or staging configurations that might contain secrets.
• Enumerate API keys and test their scope, considering the broader supply chain impact if a token allows modifying production code.
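The first step above, sweeping extracted files for token-shaped strings, is exactly what tools like Gitleaks and TruffleHog automate, and the core idea fits in a few lines. A minimal sketch; the pattern shown matches GitHub's classic personal access token format (the prefix `ghp_` followed by 36 alphanumeric characters) plus fine-grained `github_pat_` tokens, and the sample .env content is illustrative:

```python
import re

# Classic GitHub personal access tokens are "ghp_" plus 36 alphanumerics;
# fine-grained tokens start with "github_pat_".
TOKEN_PATTERN = re.compile(
    r"\b(?:ghp_[A-Za-z0-9]{36}|github_pat_[A-Za-z0-9_]{22,})\b"
)

def find_tokens(text):
    """Return any GitHub-token-shaped strings found in a blob of text."""
    return TOKEN_PATTERN.findall(text)

sample_env = "GITHUB_TOKEN=ghp_" + "a" * 36 + "\nDEBUG=true\n"
print(find_tokens(sample_env))  # one hit: the ghp_... value
```

In practice you would walk the whole extracted directory tree and run this over every text file, not just ones named .env.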

This finding proves that sometimes, the most critical security risks come from simple mistakes left behind in production.

FINDING #2 - How Misconfigured Permissions Led to Full System Takeover
"An exposed GitHub token hidden inside an application? That’s bad enough. But what if I told you the next attack didn’t just expose credentials—it turned a simple misconfiguration into complete system takeover? No zero-days. No malware. Just one overlooked permission that let an attacker go from low-privileged access to NT AUTHORITY\SYSTEM. Let’s break it down."
You’re inside a Windows server running a collaboration platform. You have low-privilege access, barely enough to do anything meaningful. But then you notice something: a configuration file that any user on the machine can read, its contents completely exposed.
This wasn’t just a minor misconfiguration; it was an open door to privilege escalation.
That’s exactly what matcluck reported on Bugcrowd—a critical security flaw that enabled an attacker to escalate from a low-privileged user to the highest level of access on a Windows machine.
It started with a misconfigured permissions issue. The server was running a widely used collaboration platform, and its database credentials were stored inside a configuration file. That alone wasn’t unusual—but here’s where things went wrong:
The .cfg.xml file was stored in a directory that any low-privileged user could read, meaning anyone with a basic account could extract the database username and password without needing special permissions.
Once the researcher confirmed they could read the file, they extracted the credentials for the PostgreSQL database used by the application.
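Pulling credentials out of such a file takes seconds once it is readable. A sketch assuming a Hibernate-style .cfg.xml, which stores connection settings as named properties; the property names and sample XML here are illustrative, not taken from the report:

```python
import xml.etree.ElementTree as ET

def extract_db_credentials(cfg_xml):
    """Pull connection credentials from a Hibernate-style config file."""
    root = ET.fromstring(cfg_xml)
    props = {
        p.get("name"): (p.text or "").strip()
        for p in root.iter("property")
    }
    return props.get("connection.username"), props.get("connection.password")

sample = """<hibernate-configuration>
  <session-factory>
    <property name="connection.username">app_user</property>
    <property name="connection.password">s3cr3t</property>
  </session-factory>
</hibernate-configuration>"""

print(extract_db_credentials(sample))  # ('app_user', 's3cr3t')
```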
Now, they had direct database access. Using those credentials, they connected to the database and started exploring until they found the user table containing all stored accounts.
Then, an opportunity appeared. Although passwords were hashed, the researcher decided to try adding their own account instead of cracking a password. They created a new admin user named "bugcrowd" and set its hashed password to a value they controlled. It worked, and the system recognized their fake user as a legitimate admin.
With this, they logged into the platform’s web interface—no hacking tools, no brute force—just a clean admin login with the credentials they had set.
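The trick here, setting a hash you control instead of cracking one you don't, can be demonstrated end to end. A self-contained sketch using SQLite and SHA-256 as stand-ins; the real platform's table layout and hashing scheme will differ, and the table and column names below are hypothetical:

```python
import hashlib
import sqlite3

def sha256_hex(password):
    """Stand-in for the application's password hashing scheme."""
    return hashlib.sha256(password.encode()).hexdigest()

# Stand-in for the application's user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password_hash TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', ?, 0)",
           (sha256_hex("unknown-to-attacker"),))

# The attacker never cracks alice's hash; they insert a brand-new admin
# row whose hash they computed themselves for a password they know.
db.execute("INSERT INTO users VALUES ('bugcrowd', ?, 1)",
           (sha256_hex("attacker-chosen"),))
db.commit()

def login(username, password):
    """Mimic the application's credential check."""
    row = db.execute(
        "SELECT is_admin FROM users WHERE username = ? AND password_hash = ?",
        (username, sha256_hex(password)),
    ).fetchone()
    return bool(row and row[0])

print(login("bugcrowd", "attacker-chosen"))  # True: recognized as admin
```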
But they weren’t stopping there; there was one more goal—full system control.

At this point, you’ve gone from a low-privileged user to an application administrator by inserting your own account into the database. What’s your next move? Would you exfiltrate data, look for remote command execution, or escalate further to gain system-wide control?

Here’s what the attacker did:
With administrator access, they had the capability to execute code directly within the system. They explored the platform’s administrative tools and discovered a built-in scripting plugin that allowed administrators to run scripts.
Using this feature, they crafted a script that spawned a reverse shell, creating a direct connection between their machine and the compromised server.
Once the shell connected, they quickly checked their privileges and discovered that among the listed permissions was SeImpersonatePrivilege—a Windows privilege that allows a user to impersonate higher-privileged accounts under the right conditions.
To escalate further from administrator to full system control, they turned to PrintSpoofer, a well-known Windows privilege escalation tool. It abuses the Print Spooler service, which runs as SYSTEM, by coercing it into connecting to a named pipe the attacker controls; with SeImpersonatePrivilege, the attacker can then impersonate the SYSTEM token carried by that connection and run code with full SYSTEM rights.
Just like that, they had full control. The system recognized them as NT AUTHORITY\SYSTEM, giving them unrestricted access to everything.
At this point, nothing was off-limits—they could modify system files, extract sensitive data, install persistent backdoors, or pivot deeper into the network. What began as a misconfigured file had led to complete system takeover.

The attacker’s mindset was:

  1. Look for misconfigured file permissions (the opportunity).
  2. Extract database credentials from an unprotected configuration file (the weakness).
  3. Use those credentials to insert a new admin account (the exploit).
  4. Leverage a known privilege escalation technique to gain SYSTEM-level access (full compromise).


Imagine you’re a system administrator responsible for securing this platform. How would you prevent this?
• Enforce stricter file permissions and never store sensitive credentials in plain text within application config files—instead, use a secure vault with strict access controls.
• Even if credentials are exposed, enforce multi-factor authentication (MFA) and IP-based restrictions to prevent unauthorized database access.
• Ensure that user creation requires additional authentication or an approval process, so attackers cannot simply add a new admin account.
• Lock down the privilege escalation path by hardening Windows privilege settings, restricting token manipulation, and disabling unnecessary services.


Any one of these defenses could have slowed the attack; together, they would have made the compromise nearly impossible.

For Pentesters: How to Find Similar Issues:
• Check for misconfigured file permissions and hardcoded credentials in application files—logs, backup files, and leftover development artifacts often expose sensitive information.
• Identify privilege escalation opportunities by looking for services or users granted SeImpersonatePrivilege.
• Abuse built-in scripting or administrative tools; many enterprise applications include script execution features that, if misused, provide a direct path to remote code execution.
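The first bullet, hunting for configuration files that any local user can read, is easy to automate. A sketch for POSIX systems using the world-readable permission bit; on Windows, where this finding took place, the equivalent check would inspect file ACLs (for example with `icacls`). The file suffixes are just illustrative defaults:

```python
import os
import stat

def world_readable(path):
    """True if any local user can read the file (POSIX 'other' read bit)."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

def find_readable_configs(root, suffixes=(".cfg.xml", ".env", ".properties")):
    """Walk a directory tree and report config files readable by everyone."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if name.endswith(suffixes) and world_readable(full):
                hits.append(full)
    return hits
```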

This remarkable finding demonstrates how a single misconfiguration in the wrong place can lead to a full system compromise.

FINDING #3 - The Credentials That Gave Access to a Network Switch
"We started with an exposed GitHub token hidden inside an application, just waiting to be found. Then, we saw how a simple misconfiguration led to full system takeover. But what if I told you the next attack required even less effort? No need to reverse-engineer an app. No need to escalate privileges. Just credentials left exposed in plain sight, giving an attacker direct access to critical infrastructure. Let’s get into it."
You're handed a list of external IP addresses to test. You run the usual recon—Shodan scans, enumeration, third-party tools—but nothing stands out. No obvious vulnerabilities, no open doors.
Then you decide to take a different approach. Instead of attacking the infrastructure directly, you start looking for exposed credentials.
That’s when you find it: a publicly accessible GitHub repository.
You search through its files and there they are—a set of login credentials, just sitting in plain text.
This was no ordinary set of credentials. It provided access to a service provider’s network switch—one that connected multiple companies.
That’s exactly what KarmicCircle uncovered and contributed to our podcast as part of our community contribution program—a security flaw that could have allowed an attacker to disable network ports, disrupt services, and potentially impact multiple businesses.
Let’s break it down.

This all started during an external penetration test engagement. The target had several exposed IP addresses, but initial reconnaissance didn’t reveal any exploitable services.
Instead of scanning for vulnerabilities, the researcher turned to Open Source Intelligence (OSINT). They used Google dorks to search for exposed files, forgotten documentation, and anything that might contain useful information.
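Dork queries like these follow a handful of repeatable patterns, so it's worth scripting them to make sure none get forgotten during recon. A sketch using standard Google search operators (`site:` restricts results to a domain, `filetype:` matches file extensions, quoted terms must appear verbatim); the target domain is a placeholder, and searches like these should only be run against organizations you are authorized to test:

```python
def credential_dorks(target):
    """Build Google dork queries that hunt for leaked secrets tied to a target."""
    return [
        f'site:github.com "{target}" password',   # leaked creds in public repos
        f'site:pastebin.com "{target}"',          # dumps mentioning the target
        f'"{target}" filetype:env',               # exposed environment files
        f'"{target}" filetype:log "password"',    # logs containing credentials
    ]

for query in credential_dorks("example.com"):
    print(query)
```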
That’s when they found it—a publicly accessible GitHub repository linked to the target organization.
Inside, they discovered a configuration file with hardcoded credentials.
At first glance, it seemed old. But the key question was: was it still valid?
They tested the credentials against various services. Most attempts failed with login errors. Then one worked: they had successfully logged into a network switch’s web interface.

At this point, you’ve logged into a network switch that controls multiple businesses. What’s your next move?
Would you start mapping out affected companies? Would you try to escalate privileges? Would you check for additional misconfigurations?
Here’s what could have happened:
• An attacker could have disabled network ports, cutting off services for multiple businesses.
• They might have intercepted and rerouted traffic, enabling man-in-the-middle attacks.
• Or they could have explored lateral movement opportunities, using this foothold to compromise additional systems.
One simple mistake—a forgotten GitHub repository—could have caused a major disruption; it was a potential denial-of-service attack at the infrastructure level.

Imagine you’re responsible for securing your company’s network infrastructure. How do you prevent this from happening?
Would you set up automated credential scanning? Enforce stronger access controls? Continuously monitor public repositories for leaks?
Here’s what should have been done:
• Credentials should never be exposed in a public repository; all sensitive credentials must be stored securely in a secrets manager such as HashiCorp Vault.
• Access to critical infrastructure should be restricted with additional security layers like multi-factor authentication (MFA), IP allowlisting, and device-based restrictions.
• Public repositories should be frequently audited to detect accidental leaks before attackers find them.

For Pentesters: How to Find Similar Issues
• Use OSINT techniques to find exposed credentials in GitHub, GitLab, Pastebin, and public issue trackers.
• Test whether leaked credentials are still valid, as many organizations don’t rotate them frequently.
• Check what services the credentials unlock—a leaked password might not seem dangerous until it grants access to network infrastructure, admin panels, or cloud environments.

KarmicCircle—thank you for sharing this finding with us.
To everyone submitting discoveries, your contributions make this podcast stronger. We appreciate every report, every creative exploit, and every overlooked misconfiguration you send our way.
If you’ve uncovered something interesting, send it in—we’re always looking to feature great findings from the community. Simply provide as much detail as possible while respecting client privacy. You’ll find the Google Form link in the description.

OUTRO
None of today’s exploits was an advanced zero-day; every one was a preventable mistake.
That’s why pentesting matters—to catch flaws before attackers do. But responsible disclosure is just as crucial; reporting vulnerabilities properly makes the internet safer.
If you found this valuable, share it with someone who may appreciate it. The more we share, the stronger we become.
Let’s make cybersecurity knowledge accessible to all. See you in the next episode.
